Pruning for GNNs: Lower Complexity with Comparable Expressiveness
Accept (poster)
Summary: In this paper, the authors propose pruned MPNNs, K-path GNNs, and K-hop GNNs to reduce the computational redundancy of the original methods. The authors prove the expressive equivalence between the original version and the pruned version. The proposed method is evaluated on various benchmark datasets and shows results comparable to the original version.

## Update after rebuttal

I would like to thank the authors for the detailed response to my questions and concerns. The authors provide both a textual explanation of the complexity analysis of their method and an experimental comparison. This resolves my biggest concern about the proposed method. I would like to increase my original score.

Claims And Evidence:

1. In the title, the authors claim that the pruned GNNs have higher expressiveness. However, I think this is somewhat misleading, as the expressive power of pruned GNNs is bounded by the original version based on the proofs provided in the paper. It is only safe to say that, given a limited number of layers/iterations, the pruned version is better than the original one.

2. In Table 5, the authors claim that the pruned version has better complexity than the original version. However, I am not convinced. First, I do not understand why the space complexity is associated with the distance $L$: for MPNNs, K-path GNNs, and K-hop GNNs, as well as their pruned versions, we only need to save an embedding for each node in the graph, which results in $O(n)$. The time complexity also looks incorrect, as we need to consider the density of the graph. Let us use the average degree $d$ to characterize it. For MPNNs, it should be $O(ndL)$. For pruned MPNNs, it seems to me that when we aggregate higher-order neighbors, the complexity of the aggregation goes up, which means the complexity differs across layers. The first layer costs just $nd$; for the second layer, we aggregate neighbors from the second hop, which results in $nd^2$. Overall, it should be $O(\sum_{i=1}^{\log(L)} nd^{2^{i-1}})$. Similarly, the analysis for K-path and K-hop GNNs also looks incorrect to me. Given my understanding of the complexity part, I do not think the proposed pruned framework will yield a large computational improvement over the original version. Even worse, the pruned version requires aggregating different neighborhoods at different layers, which introduces additional pre-processing time and is not flexible for adjusting the number of layers.

3. The authors mention that fewer layers can reduce the nonlinearity of GNNs and reduce over-smoothing. However, this is a problem primarily for node-level tasks, while the proposed methods are mainly used for graph-level tasks.

Methods And Evaluation Criteria:

1. As mentioned above, although the proposed method requires fewer layers to achieve a certain expressive power, the per-layer complexity varies and is much higher than in the original version. Therefore, it is hard to see that the proposed method is more efficient.

2. The pruning strategy seems entirely different for different GNNs. Are there common criteria for pruning, so that a similar strategy can be applied to other GNNs?

3. Even if the proposed method does improve efficiency due to the smaller number of layers, the application is limited. Specifically, the advantage is $\log(L)$ versus $L$; however, real-world graph tasks usually involve molecules, whose $L$ is usually less than 10. Therefore, it does not seem like a big improvement to me.

Theoretical Claims: The complexity analysis seems wrong to me; see above. Given the limited time, I could not check the proofs of all theorems. All other conclusions seem correct to me.

Experimental Designs Or Analyses:

1. I believe a more comprehensive ablation study on the complexity is **necessary** to show the potential of the proposed methods. Specifically, I would like to see a comparison of the original version and the pruned version on pre-processing, training, and inference time under different numbers of layers and datasets (different graph distributions). Meanwhile, the space cost should also be evaluated to support Table 5.

2. I would suggest another ablation study relating the number of layers to performance, using both the original and pruned versions on both simulated and real-world data, to see whether the pruned version really performs better, especially with a limited number of layers.

Supplementary Material: I did not see any supplementary material, and there is no code provided.

Relation To Broader Scientific Literature: If the method really has a computational advantage over existing methods, it can be used in molecular or biomedical research.

Essential References Not Discussed: The topic is related, and the experimental protocol is almost identical, to the existing paper [1], but it is not cited, discussed, or compared against.

[1] Jiarui Feng, et al., How powerful are K-hop message passing graph neural networks, NeurIPS 2022.

Other Strengths And Weaknesses: See above.

Other Comments Or Suggestions: See above.

Questions For Authors: See above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their feedback and constructive comments.

Claims And Evidence:

(2) Space complexity: During backpropagation in training, the node representations of intermediate layers (e.g., activation values) are required to compute gradients and update weights. If intermediate node representations are discarded, backpropagation cannot proceed. Therefore, the node representations from each layer need to be cached, which results in $O(n \cdot L)$.

Time complexity: We speculate that the reviewer reached the conclusion $\sum_{i=1}^{\log(L)} O(nd^{2^{i-1}})$ due to a divergence in computational paradigms. Suppose the aggregation method is sum. In the reviewer's view, $l$ rounds of aggregation are computed as follows: (1) compute the $l$-th power of the adjacency matrix, $A^l$; (2) compute the product of $A^l$ and the node representations $h$, i.e., $A^l \cdot h$. This method involves computing matrix-matrix products $L$ times, resulting in extremely high complexity (even if $A$ is a sparse matrix). In our approach (Equation (15) on page 5), however, the aggregation is decomposed into $l$ rounds of 1-neighborhood aggregation. In other words, $A^l \cdot h$ is computed as: for $i$ in $[l]$: $h_i = A \cdot h_{i-1}$. Our approach only computes products between a matrix and a node-representation vector, and thus achieves lower computational complexity than the approach above. Therefore, the complexity of the aggregation is $\sum_{i=1}^{\log(L)} O(2^{i-1}nd) = O(ndL)$.

We provide better visualizations of the pruned architectures in the link below. Regarding the reviewer's concern about complexity, we decompose the time complexity into two components: (1) feature aggregation ($O_{agg}(1)$ denotes one basic aggregation for a node) and (2) the MLP operations ($O_{MLP}(1)$ denotes one MLP layer), plus the space complexity ($O_{spa}(1)$ denotes one node's representation).
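The decomposition of $A^l \cdot h$ into repeated 1-neighborhood aggregations described in this rebuttal can be sketched in a few lines (a hypothetical toy illustration with a made-up adjacency-list graph, not the authors' code):

```python
def aggregate_l_hops(adj, h, l):
    """Compute l rounds of 1-neighborhood (sum) aggregation, i.e. A^l · h,
    without ever forming the matrix power A^l. Each round touches every
    edge once, so the total cost is O(n * d * l) for average degree d,
    matching the O(ndL) argument in the rebuttal.
    """
    for _ in range(l):
        h = [sum(h[v] for v in adj[u]) for u in range(len(adj))]
    return h

# Hypothetical toy graph: a 4-cycle 0-1-2-3-0, stored as adjacency lists.
adj = [[1, 3], [0, 2], [1, 3], [0, 2]]
h0 = [1.0, 2.0, 3.0, 4.0]
print(aggregate_l_hops(adj, h0, 2))  # → [8.0, 12.0, 8.0, 12.0]
```

Each call multiplies by the adjacency matrix once per round, so the per-round cost is proportional to the number of edges, never to the density of $A^l$.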
Table 5 can be modified into the following table (we assume $K \ll L$):

| Training Complexity | MP-GNN | PR MP-GNN | K-HOP | PR K-HOP | K-PATH | PR K-PATH |
|----------|----------|----------|----------|----------|----------|----------|
| Time | $O_{agg}(nL)+O_{MLP}(L)$ | $O_{agg}(nL)+O_{MLP}(\log(L))$ | $O_{agg}(nL)+O_{MLP}(L/K)$ | $O_{agg}(nL/K)+O_{MLP}(L/K)$ | $O_{agg}(nL)+O_{MLP}(L/K)$ | $O_{agg}(nL/K)+O_{MLP}(L/K)$ |
| Space | $O_{spa}(nL)$ | $O_{spa}(n\log(L))$ | $O_{spa}(nL/K)$ | $O_{spa}(nL/K)$ | $O_{spa}(nL/K)$ | $O_{spa}(nL/K)$ |

On the other hand, in our ablation experiments, the time consumed by aggregation is not significant. As for adjusting the number of layers, $a_l$ does not have to be $2^{l-1}$; we chose $2^{l-1}$ because it reaches the maximum expressiveness. Theorem 4.1 has shown that as long as the sequence is viewable, the pruned model is as powerful as the WL test. Hence we can flexibly adjust $a_l$ to an arithmetic sequence ($a_l=l$) or another viewable sequence.

(1): The initial intention behind the title derives from the comparison of MP-GNN and PR K-PATH GNN, rather than MP-GNN and PR MP-GNN: from (2), if $K \ll L$, the complexity of PR K-PATH GNN is less than that of MP-GNN, while PR K-PATH is as powerful as K-PATH, and K-PATH is strictly more powerful than MP-GNN. We sincerely appreciate the reviewer's insightful comment regarding the potential ambiguity in the paper's title. In response to the reviewer's suggestion, we would like to change the title to "Pruning for GNNs: Lower Complexity with Comparable Expressiveness" to better reflect the study's focus.

(3) The pruned methods can also be used for node-level tasks, since the graph representation is an aggregation of node representations. In other words, given a pruned MP-GNN $M'$ as powerful as an arbitrary MP-GNN $M$, if any two nodes get the same representation in $M$, they will also get the same representation in $M'$.
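The "viewable" condition on the layer sequence $a_l$ mentioned above (defined elsewhere in the rebuttals as: the subset sums of $\{a_l\}$ are dense in $[S_l]$, i.e., every hop distance up to the running total is reachable) can be checked with a short sketch. The helper name and this reading of the condition are ours, not the authors':

```python
def is_viewable(seq):
    """Check that every integer distance 1..sum(seq) is a subset sum of
    the layer sequence, so the pruned layers can reach every hop
    distance up to S_l. (Hypothetical reading of the paper's condition.)
    """
    total = sum(seq)
    sums = {0}
    for a in seq:
        sums |= {s + a for s in sums}
    return all(d in sums for d in range(1, total + 1))

# The powers-of-two sequence 2^{l-1} and the arithmetic sequence a_l = l
# are both viewable; a sequence with a gap, e.g. [1, 4], is not.
assert is_viewable([1, 2, 4, 8])
assert is_viewable([1, 2, 3, 4])
assert not is_viewable([1, 4])
```

Under this check, both $a_l = 2^{l-1}$ and $a_l = l$ pass, consistent with the rebuttal's claim that the sequence can be adjusted freely as long as it stays viewable.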
Methods And Evaluation Criteria:

(1) This has been addressed above.

(2) The pruning strategy seems different because MP-GNN uses a single aggregation while K-HOP/K-PATH use multiple aggregations. Our pruned method for MP-GNN is suitable for any single-aggregation GNN, but the pruned method for multi-aggregation GNNs depends on the associations between the different aggregated neighborhoods (so expressive equivalence cannot be guaranteed in general).

(3) The reason why GNNs typically aggregate node information within a distance $L \le 10$ is that larger $L$ values lead to issues such as excessive parameter size (prone to overfitting on small datasets) and high computational/memory costs. However, our pruning method reduces the number of parameters to a logarithmic scale ($\log(L)$), thereby potentially enabling GNNs to aggregate information from more distant nodes.

Experimental Designs Or Analyses: We have supplemented the ablation experiments on pruning efficiency and conducted experiments on the large-scale graph dataset ogbn-arxiv. The experimental conclusions can be found in our responses to the other reviewers.

All the experimental materials (code), results, the better-visualized pruned architectures, and responses to other questions are provided at https://anonymous.4open.science/r/PrunedGNN-AC61/README.md

---

Rebuttal Comment 1.1: Comment: I would like to thank the authors for the detailed clarification and response. Most of my concerns have been addressed. I will increase my score accordingly. Please include all the additional results in the future version.

---

Reply to Comment 1.1.1: Comment: Thank you sincerely for your generous feedback and for raising your evaluation of my work. I greatly appreciate the time and thought you invested in reviewing my manuscript and offering such valuable suggestions. Your support is truly meaningful, and I have found the revision process very rewarding thanks to your insights.
Summary: The paper proposes a pruning framework for GNNs aimed at improving computational efficiency while maintaining or even enhancing expressive power. The authors introduce pruned versions of Message Passing GNNs (MP-GNNs), K-Path GNNs, and K-Hop GNNs by identifying and removing redundant structures. Theoretical analysis using MATLANG demonstrates that pruned versions retain the expressive power of their unpruned counterparts, and experiments on multiple datasets validate the efficiency gains. The main contributions include:

- A theoretical justification for pruning in GNNs without sacrificing expressive power.
- A pruned message passing framework that maintains equivalent expressiveness while reducing complexity.
- Empirical validation showing that pruned frameworks outperform or match unpruned models across benchmark datasets with lower computational cost.

Claims And Evidence: The authors claim that:

- Pruning redundant structures does not reduce expressive power: this is well-supported by theoretical analysis using MATLANG and empirical validation on graph isomorphism tasks.
- Pruned models have lower computational complexity: the paper provides clear evidence through algorithmic complexity analysis and runtime measurements.
- Pruned K-Hop and K-Path GNNs distinguish more non-isomorphic graphs than MP-GNNs: while theoretically sound, this claim could benefit from more empirical results on diverse graph structures.
- Pruning improves training efficiency and scalability: the results support this claim, but more large-scale graph evaluations (e.g., OGB datasets) would strengthen it.

Overall, the claims are mostly well-supported, but additional empirical validation, particularly on large-scale datasets, would reinforce the conclusions.

Methods And Evaluation Criteria: The methodology is well-structured, with clear mathematical definitions and derivations. The pruning strategies are rigorously developed, and the expressiveness analysis is grounded in matrix algebra. The evaluation criteria include:

- Expressiveness tests: distinguishing non-isomorphic graphs.
- Graph property prediction: evaluating node and graph features.
- Real-world benchmarks: TU datasets, QM9, and ZINC.

The chosen benchmarks and evaluation metrics are appropriate, though additional experiments on larger-scale datasets would provide further validation.

Theoretical Claims: The paper provides several key theoretical results:

- Proof that pruned MP-GNNs retain 1-WL equivalence: the proof is logically structured and appears correct.
- Equivalence of pruned K-Path GNNs to standard K-Path GNNs: this is rigorously shown via MATLANG.
- Pruned K-Hop GNNs retain equivalence for distinguishing regular graphs: the proof is incomplete due to lost structural information.
- Computational complexity analysis: the theoretical derivations appear sound.

Experimental Designs Or Analyses: The experiments are well-designed but could be improved in a few areas:

- Expressiveness experiments: the graph isomorphism tests effectively demonstrate the theoretical claims.
- Graph property prediction: results on the TU datasets and QM9 confirm that pruning does not degrade performance.
- Computational efficiency: the paper successfully demonstrates reduced parameter count and runtime improvements.

However, experiments on larger graphs (e.g., OGB datasets) and ablation studies on different pruning strategies would strengthen the empirical claims.

Supplementary Material: The supplementary material was not extensively reviewed, but it includes:

- Detailed proofs for theoretical claims.
- Extended experiments and hyperparameter details.
- MATLANG derivations used in expressiveness proofs.

Reviewing the correctness of all proofs would require additional time, but the main arguments seem sound.

Relation To Broader Scientific Literature: The paper is well-grounded in the existing literature on GNN expressiveness and efficiency:

- Expressiveness limits of MP-GNNs (Xu et al., 2019; Morris et al., 2019): the pruning approach aligns with previous findings that message-passing GNNs are constrained by 1-WL limitations.
- Higher-order GNNs (Maron et al., 2019; Azizian & Lelarge, 2021): the authors position their work as a computationally efficient alternative to higher-order approaches.
- Subgraph-based methods (Bevilacqua et al., 2022; Zhao et al., 2022): the paper could better contrast pruning with subgraph-based improvements in expressiveness.

Essential References Not Discussed: The following studies could be cited, as they present different pruning techniques that can be compared against the approach proposed in the paper:

[1] Dupty et al., PF-GNN: Differentiable particle filtering based approximation of universal graph representations, ICLR 2022.
[2] Fey et al., GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings, ICML 2021.
[3] Müller et al., GraphChef: Decision-Tree Recipes to Explain Graph Neural Networks, ICLR 2024.

Other Strengths And Weaknesses:

**Strengths**

- The paper introduces a novel pruning strategy for GNNs, which reduces computational complexity while preserving expressive power. Unlike existing methods that enhance expressiveness by increasing depth or complexity, this approach optimizes efficiency without sacrificing performance.
- The paper is well-structured, with clear theoretical justifications, proofs, and empirical results supporting the claims.
- The extensive experimentation across synthetic and real-world datasets strengthens the findings, particularly for efficiency improvements.
- If widely adopted, the proposed pruning strategies could make expressive GNNs more computationally feasible for large-scale applications, such as molecular modeling.
**Weaknesses**

- While the theoretical analysis supports the expressiveness claims, the experimental validation relies mostly on synthetic datasets and indirect comparisons (e.g., isomorphism tests). Direct comparisons on real-world tasks requiring high expressiveness (e.g., molecular property prediction) would strengthen the argument. A more comprehensive ablation study showing how different pruning levels affect expressiveness would be useful.
- The efficiency claims are theoretically sound, but the experimental validation on large-scale graphs is somewhat lacking. How well do the pruned frameworks generalize to datasets with millions of nodes and edges?
- In particular, K-Hop pruning loses information about non-shortest paths. While the authors argue that this is not critical, further empirical evidence would help support this claim.

Other Comments Or Suggestions: Please refer to the weaknesses mentioned above.

Questions For Authors:

1. Are there any known cases where pruned K-Hop fails to distinguish graphs that the original K-Hop framework can differentiate? Providing examples would help clarify the limitations.
2. The claim that pruning redundant structures enhances efficiency without compromising expressiveness is central to the paper. Could the authors conduct an ablation study comparing different pruning strategies to determine whether all identified redundant structures should be removed?
3. Have the authors tested how pruning affects GNNs trained on very large graphs (e.g., OGB datasets)? If so, do the training and inference times scale as expected?
4. The experimental results validate the effectiveness of pruned frameworks but do not directly measure expressiveness beyond isomorphism-based tasks. Could the authors evaluate the frameworks on tasks that require high expressiveness, such as molecular property prediction with long-range dependencies?
5. Could there be datasets where pruning negatively impacts expressiveness? If so, an analysis of when pruning is beneficial versus detrimental would strengthen the conclusions.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely acknowledge reviewer V1ZW for the constructive criticism and insightful suggestions.

Questions For Authors:

Q1: Unfortunately, we have to admit that finding a pair of graphs that the original K-Hop framework can distinguish but the pruned K-Hop cannot would involve a tremendous amount of algebraic analytic work; for comparison, the proof of the existence of a pair of graphs distinguishable by the (k+1)-dimensional Weisfeiler-Lehman ((k+1)-WL) test but not by the k-dimensional Weisfeiler-Lehman (k-WL) test spans 58 pages. However, even if such a pair exists, we can draw the conclusion that the probability of inconsistent expressiveness between K-Hop and pruned K-Hop goes to 0 as the graph size $n$ goes to infinity. As shown in the response to Q2, we also conducted experiments on this issue on real-world and synthetic datasets, and all of the results show that the two have the same expressiveness.

Q2: We have added experiments to verify the consistency of the expressive power between the pruned architecture and the original algorithm on both real and synthetic datasets. The experimental results show that the expressive power of the pruned architecture is identical to that of the original algorithm. We also conducted ablation experiments for the GNNs; the results show that the pruned version achieves significant reductions in both parameter count and training duration, demonstrating noticeable efficiency improvements. We have further supplemented ablation experiments analyzing the impact of different pruning strategies on model efficiency and accuracy. We summarize the core conclusions: (1) regarding expressiveness (the graph-isomorphism WL test), all identified redundant structures should be removed, since this enhances efficiency; (2) regarding performance (accuracy), retaining certain redundant structures can sometimes slightly improve the model's accuracy. The reason is that if all identified redundant structures are removed, the subsequent layers need to aggregate a large number of representations; since MLPs are not strict hash functions, this can degrade the model's performance. As for pruning strategies for MP-GNNs, we point out that the sequence $a_l$ can be flexibly adjusted: as long as $a_l$ is viewable (the subset sums of $\{a_l\}$ are dense in $[S_l]$), such as the arithmetic sequence ($a_l=l$) or the Fibonacci sequence ($a_l=a_{l-1}+a_{l-2}$), it has the same expressiveness as the MP-GNN.

Q3: We have tested how pruning affects GNNs trained on very large graphs using the ogbn-arxiv dataset. The results show that while maintaining comparable accuracy to the original model, the pruned version achieves significant reductions in both parameter count and training duration, demonstrating noticeable efficiency improvements.

Q4: As shown in Q2, we have conducted the WL test on both real and synthetic datasets. The pruned WL algorithm is considered accurate only when its graph outputs are entirely consistent with those of the original algorithm, which requires high expressiveness. Meanwhile, the experimental results demonstrate a significant improvement in the efficiency of the pruned algorithm.

Q5: During the experiments, the 4-layer [1,2,3,4] pruned GIN often performs worse than the 3-layer [1,2,3] pruned GIN. We found that the reason is that, when the network gets deeper, a substantial number of neighbor representations are repeatedly aggregated, which significantly impedes the model's ability to extract useful information. Consequently, we refined the model to eliminate redundant representation aggregation, thereby enhancing pruned GIN's performance.

Weaknesses: "In particular, K-Hop pruning loses information about non-shortest paths. While the authors argue that this is not critical, further empirical evidence would help support this claim."

Our original intention was to point out that, compared to the K-Path framework, K-Hop loses information about non-shortest paths. As a result, we cannot prove that the pruned K-Hop is equivalent in expressive power to the original model. However, this does not have a significant impact. Firstly, we demonstrate the equivalence between the pruned K-Hop and the original model for regular graphs and strongly regular graphs, while the K-Hop model was specifically designed to address the inability of MP-GNNs to distinguish regular graphs. Secondly, in our ablation experiments, the pruned K-Hop exhibits expressive power equivalent to the original model on both real-world and synthetic experiments. Compared to K-Path, K-Hop is indeed less powerful, since K-Path assigned a larger number of classes during the WL expressiveness experiment.

All the experimental materials (code), results, and responses regarding the references are provided at https://anonymous.4open.science/r/PrunedGNN-AC61/README.md

Thanks again for the reviewer's detailed analysis and questions, which allowed us to further refine the model in the ogbn-arxiv experiments.

---

Rebuttal Comment 1.1: Comment: Thank you for your considerate answers to my questions. I truly appreciate the time and effort you took to address my concerns. I will be keeping my original score.

---

Reply to Comment 1.1.1: Comment: I am very grateful for your thoughtful comments. Your feedback significantly contributed to refining the manuscript, and I truly value your support throughout the revision process.
Summary: This paper proposes pruned versions of Message Passing GNNs, K-Hop GNNs, and K-Path GNNs by eliminating redundant structures. The authors claim that these pruned frameworks maintain or even improve expressive power while reducing computational complexity. Theoretical analysis based on matrix language is used to demonstrate equivalence in expressiveness between the pruned and original frameworks. Additionally, experimental results on benchmark datasets show that pruned GNNs achieve comparable or better performance with improved efficiency.

Claims And Evidence: The paper claims that pruning redundant structures in MP-GNNs, K-Hop GNNs, and K-Path GNNs maintains expressiveness while reducing complexity. While some theoretical arguments support this claim, the practical significance is not well justified.

Methods And Evaluation Criteria: The methodology is theoretically sound in using matrix-language tools to analyze expressiveness. However, the evaluation mainly focuses on standard benchmark datasets without strong ablation studies or comparisons with more sophisticated recent baselines.

Theoretical Claims: The theoretical analysis claims that pruned GNNs maintain the expressive power of the original architectures.

Experimental Designs Or Analyses: The experimental setup evaluates pruned GNNs on several benchmark datasets. However, the improvements in efficiency are marginal, and the comparisons do not convincingly demonstrate practical benefits over existing optimized GNN models. I do not think the TU datasets are enough to evaluate the method; the authors should consider OGB datasets for evaluation.

Supplementary Material: I read the supplementary material.

Relation To Broader Scientific Literature: The work improves the expressiveness of GNNs (broader literature) while reducing complexity.

Essential References Not Discussed: No significant gaps found.

Other Strengths And Weaknesses: The theoretical analysis provides an interesting perspective on expressiveness equivalence, but the empirical evaluations are limited.

Other Comments Or Suggestions: Consider adding experiments on larger, real-world datasets to better illustrate the efficiency gains.

Questions For Authors: Have you tested the approach on large-scale graphs, where efficiency improvements might be more noticeable?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank reviewer hERx for the careful evaluation and meaningful suggestions.

Q1: "The methodology is theoretically sound in using matrix language tools to analyze expressiveness. However, the evaluation mainly focuses on standard benchmark datasets without strong ablation studies or comparisons with more sophisticated recent baselines."

To verify the expressive-power equivalence between the pruned WL tests and their original algorithms, we have added experiments on both real and synthetic datasets. The experimental results show that the expressive power of the pruned architecture is almost identical to that of the original algorithm. Additionally, we have evaluated the efficiency improvements of the pruned architecture over the original, both within the WL algorithm and as a GNN model. The results show that under the WL algorithm, the pruned architecture maintains the same expressive power as the original while significantly reducing computation time. As for the GNNs, we compared the pruned models in terms of accuracy and training time; the results show that the pruned models achieve accuracy comparable to the original models, while most pruned variants demonstrate better training efficiency than the baseline. Some of the results are shown below.
| Model | COLLAB Time | COLLAB Acc (%) | NCI1 Time | NCI1 Acc (%) | IMDB-B Time | IMDB-B Acc (%) | IMDB-M Time | IMDB-M Acc (%) | MUTAG Time | MUTAG Acc (%) | PROTEINS Time | PROTEINS Acc (%) |
|---------------|--------|---------|--------|---------|--------|---------|--------|---------|--------|---------|--------|---------|
| GIN(3) | 1.104 | 74.8 ± 1.3 | 0.480 | 71.9 ± 0.5 | 0.251 | 71.9 ± 0.3 | 0.304 | 49.9 ± 0.0 | 0.889 | 89.4 ± 0.4 | 0.268 | 73.7 ± 0.7 |
| PR GIN(1) | 1.060 | 73.9 ± 0.0 | 0.461 | 72.9 ± 1.4 | 0.209 | 69.9 ± 2.0 | 0.284 | 50.6 ± 0.3 | 0.886 | 88.5 ± 0.0 | 0.233 | 72.2 ± 1.9 |
| GIN(7) | 1.638 | 77.4 ± 1.6 | 0.748 | 71.5 ± 1.4 | 0.578 | 72.6 ± 0.3 | 0.534 | 51.1 ± 0.3 | 0.904 | 89.4 ± 1.0 | 0.527 | 76.3 ± 0.2 |
| PR GIN(124) | 1.284 | 76.4 ± 0.7 | 0.6961 | 75.4 ± 0.2 | 0.481 | 71.7 ± 1.4 | 0.464 | 52.0 ± 0.5 | 0.916 | 92.0 ± 0.4 | 0.425 | 74.1 ± 1.0 |
| GIN(10) | 2.142 | 74.7 ± 0.6 | 1.122 | 75.9 ± 1.3 | 0.948 | 72.1 ± 2.8 | 0.898 | 49.7 ± 1.5 | 0.874 | 87.7 ± 0.2 | 0.867 | 72.3 ± 0.0 |
| PR GIN(1234) | 1.689 | 75.6 ± 0.3 | 0.981 | 74.7 ± 0.2 | 0.710 | 71.5 ± 0.5 | 0.780 | 51.2 ± 0.9 | 0.929 | 90.7 ± 2.1 | 0.667 | 72.5 ± 2.2 |
| 2-Hop(3) | 1.357 | 76.8 ± 0.8 | 0.608 | 73.6 ± 0.9 | 0.410 | 71.0 ± 0.7 | 0.415 | 50.1 ± 1.5 | 0.910 | 91.0 ± 0.0 | 0.394 | 69.5 ± 1.3 |
| PR 2-Hop(3) | 1.180 | 75.1 ± 1.1 | 0.528 | 76.5 ± 1.6 | 0.357 | 71.5 ± 0.5 | 0.361 | 52.5 ± 0.7 | 0.929 | 91.3 ± 1.5 | 0.342 | 73.3 ± 0.3 |
| 2-Hop(5) | 1.927 | 74.2 ± 0.5 | 0.880 | 70.6 ± 1.7 | 0.680 | 68.8 ± 0.8 | 0.628 | 49.5 ± 0.6 | 0.894 | 88.3 ± 1.0 | 0.621 | 71.0 ± 0.8 |
| PR 2-Hop(5) | 1.606 | 74.5 ± 0.4 | 0.734 | 71.1 ± 1.5 | 0.566 | 69.7 ± 0.2 | 0.523 | 48.0 ± 2.2 | 0.871 | 88.7 ± 1.5 | 0.517 | 72.0 ± 0.3 |
| 2-Path(3) | 1.385 | 75.6 ± 0.0 | 0.620 | 73.0 ± 0.8 | 0.418 | 71.4 ± 0.1 | 0.423 | 49.5 ± 1.8 | 0.9045 | 88.7 ± 1.7 | 0.402 | 72.7 ± 2.0 |
| PR 2-Path(3) | 1.385 | 76.1 ± 0.3 | 0.632 | 75.5 ± 0.4 | 0.488 | 74.2 ± 1.9 | 0.451 | 51.6 ± 0.4 | 0.915 | 91.6 ± 0.0 | 0.446 | 76.9 ± 0.7 |

Q2: "The experimental setup evaluates pruned GNNs on several benchmark datasets. However, the improvements in efficiency are marginal, and the comparisons do not convincingly demonstrate practical benefits over existing optimized GNN models. I do not think the TU datasets are enough to evaluate the method. The authors should consider OGB datasets for evaluations."

After making appropriate improvements to the models so that they are suitable for large-scale graphs, we conducted large-scale graph experiments on the ogbn-arxiv dataset for GIN, pruned GIN, and pruned multiple-aggregation GIN. The results show that while maintaining comparable accuracy to the original model, the pruned version achieves significant reductions in both parameter count and training duration, demonstrating noticeable efficiency improvements. Some of the results are shown below.

| Model | Test Accuracy | Val Accuracy | Parameters | Total time |
|----------|----------|----------|----------|----------|
| GIN | 0.7012 ± 0.0114 | 0.7132 ± 0.0023 | 2.29M | 2.23h |
| Pruned GIN | 0.6905 ± 0.0241 | 0.7231 ± 0.0041 | 0.98M | 1.52h |
| Pruned 2-Mul GIN | 0.7121 ± 0.0092 | 0.7341 ± 0.0066 | 1.14M | 1.73h |

All the experimental materials (code) and results are provided at https://anonymous.4open.science/r/PrunedGNN-AC61/README.md
Parameter-Efficient Fine-Tuning of State Space Models
Accept (poster)
Summary: This paper studies the fine-tuning of state-space models, in particular S4 and S6. Empirical studies on fine-tuning the encoder and the decoder using many different existing tuning mechanisms are shown. Then, an SDT-P fine-tuning strategy for the autoregressive modules is proposed, based on pruning and then sparsely fine-tuning a set of latent states. It is also empirically shown that the pruning stage can be pruned. Experiments show that the SDT method works better than the traditional LoRA strategy.

## Update after rebuttal

I have raised my score to weak accept to reflect my current position.

Claims And Evidence: I find the claims about the empirical advantages of SDT well-supported by experiments.

Methods And Evaluation Criteria: I find the proposed methods and evaluation criteria appropriate for the problem at hand.

Theoretical Claims: I did not check the correctness of the proofs. In fact, I am not satisfied with the presentation of some theoretical statements, which I will clarify in the comments/questions sections.

Experimental Designs Or Analyses: I checked the soundness of the experimental designs and found them appropriate.

Supplementary Material: I did not review the supplementary material.

Relation To Broader Scientific Literature: This paper analyzes the fine-tuning of a promising class of large language models, which is certainly an important problem.

Essential References Not Discussed: NA

Other Strengths And Weaknesses:

Strengths:
* The problem studied in this paper is important and opportune.
* The empirical study is comprehensive and convincing.

Weaknesses:
* While the central ideas of the paper are clear, I do have some comments on the presentation of the material. See the section below.
* The theoretical statements are on the rough side and many statements need rephrasing. Please see the section below.
Other Comments Or Suggestions: The comments here are mainly about the presentation of the material and the delivery of the theoretical statements. My current evaluation of "weak reject" is mainly due to the presentation issues. I can guarantee that once the author(s) put efforts into addressing the following comments, pushing up my evaluation would be an easy reach. 1. Right now, the fine-tuning strategies studied and compared in this paper are only referred to without any other details provided. While some of them are popular, readers who are not experts on fine-tuning may find it hard to parse all of them, and consequently Table 1, all at once. It would be useful to add a brief and mathematical introduction to some of them. An introduction in the supplementary material would be otherwise useful. 2. Lemma 2 and Theorem 1 need rephrasing to be mathematically correct. In particular, consider the following questions: 1. In Lemma 2, the quantity in (5) is just a number and does not depend on the initial model $f_0$. This does not seem to correctly reflect the definition of (5). 2. In Lemma 2, when saying "the minimum number of tunable parameters," do you mean the precise minimum number (so that we know for sure that if the number goes below it, then we'll not be able to do the job) or an upper bound of the minimum number? 3. In Theorem 1, $L^*$ and $H^*$ are not defined. I think they come from the target model. In that case, it needs to be clearly stated what the target model is. 4. In Theorem 1, when saying "selectively fine-tuning," what is the training algorithm being used? Or is it just an algorithm-free statement that considers the "best" way of modifying the tunable parameters? 5. In Theorem 1, what does "accurately represent" mean? 3. Some minor comments: 1. Technically, an S4 model refers to a model where $\mathbf{A}$ is the sum of a diagonal matrix plus a rank-one matrix, whereas an S4D model is the one with a diagonal $\mathbf{A}$. 
The author(s) should be more precise. 2. Throughout the manuscript $\otimes$ is used for entrywise product. This is a very misleading notation as it is often used for the Kronecker product. Consider using $\odot$ or $\circ$ for the Hadamard product instead. 3. On line 293-294, please be more precise about what "up to the same permutation" means. Questions For Authors: 1. Lemma 1 is about the expressiveness of different ways of doing the fine-tuning. It does not consider, for example, the stability of representations, or more naively, how easy it is to reach the target by only fine-tuning the encoder. Can the author(s) comment on this? 2. Lemma 1 seems to diminish the significance of the proposed method, as it says that fine-tuning can be done without changing the autoregressive units. This may be related to the first point, but can the authors provide more clarification on this in the manuscript so that the transition from section 4 to 5 is smoother? 3. On line 279, the author(s) wrote "assume all hidden dimensions are active." What does that mean precisely? 4. What is the efficiency of the proposed method compared to, for example, LoRA? For example, Figure 2 is mainly a comparison based on the same number of tunable parameters. What about a comparison based on the fine-tuning time? Also, how do these change as the sequence lengths grow? 5. How are $\alpha$ and $\beta$ in Algorithm 1 selected in Table 3 and 4 and why is it a fair comparison to LoRA with whatever the hyperparameters you selected there? These are necessary to be discussed in the main manuscript. Code Of Conduct: Affirmed. Overall Recommendation: 3
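To make the notational point in item 2 above concrete ($\otimes$ conventionally denotes the Kronecker product, while $\odot$ or $\circ$ is the customary symbol for the entrywise Hadamard product), here is a minimal numpy illustration; this snippet is only an illustration of the notation, not code from the paper under review.

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[10, 20], [30, 40]])

hadamard = A * B       # entrywise (Hadamard) product, customarily written ⊙ or ∘
kron = np.kron(A, B)   # Kronecker product, the usual meaning of ⊗

# The two operations differ even in shape: the Hadamard product keeps the
# operand shape, while the Kronecker product of m×n and p×q matrices is mp×nq.
assert hadamard.shape == (2, 2)
assert kron.shape == (4, 4)
```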
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback. We've addressed all your concerns below. --- > Q1: Lemma 2 Clarifications > * (i) Use $\odot$ or $\circ$ for Hadamard product? > * (ii) Clarify "same permutation"? > * (iii) (5) seems independent of $f_0$—is this correct? > * (iv) What does "all hidden dimensions are active" mean? > * (v) Is it the exact or an upper bound on tunable parameters? * (i) Fixed. * (ii) "Up to the same permutation" means that the same permutation matrix $P$ is applied to all three parameters of the initial model $f_0$. This yields a model that is functionally equivalent but with permuted hidden dimensions, giving $$\Theta_0= \\{(P^\top \overline{A}_0 P, P^\top \overline{B}_0, C_0 P) : P \text{ is a permutation matrix}\\}.$$ Here, $\overline{A}_0$, $\overline{B}_0$, and $C_0$ are the parameters of $f_0$ before fine-tuning. * (iii) The quantity in (5) does, in fact, depend on the initial model $f_0$ through the constraint set $\Theta_0$. To prevent confusion, we rephrased Lemma 2 to highlight this constraint set. * (iv) It means that (for the target model $f_\star$) all hidden dimensions are non-zero. * (v) When the assumption (iv) holds, it refers to the exact minimum. **`Preview`** Revised Lemma 2 with improved clarity (https://anonymous.4open.science/r/ce7/l2.png). Notations are from Sec. 5.1. > Q2: Theorem 1 > * (i) Define $L^\star$ and $H^\star$—are they from the target model? > * (ii) Does "selectively fine-tuning" imply a specific algorithm? > * (iii) Clarify "accurately represent." * (i) $L^\star$ and $H^\star$ are indeed the number of layers and the hidden dimension of the target model $f_\star.$ We have updated the statement to better introduce the target model and the corresponding notations. * (ii) The statement focuses solely on model expressiveness, independent of training algorithms. 
* (iii) It means "functionally equivalent," i.e., the fine-tuned model $f$ satisfies $f(x) = f_\star (x)$ for all input sequences $x$. **`Preview`** Revised version of Theorem 1 with improved clarity (https://anonymous.4open.science/r/ce7/t1.png). > Q3: Lemma 1 > * (i) Lemma 1 covers only expressiveness, not optimization—please comment. > * (ii) Lemma 1 suggests fine-tuning linear projections alone is sufficient; does this diminish your method’s significance? * (i) **`New Exp.`** Lemma 1 analyzes only theoretical expressiveness. Empirical results (https://anonymous.4open.science/r/ce7/f1.png) confirm fine-tuning $W_{\text{in}}$ alone matches tuning $W_B$, $W_C$, $W_{\Delta, \uparrow}$ across three GLUE tasks. * (ii) Great point. To clarify, in Lemma 1, $A$ and $W_{\Delta,\downarrow}$ are fixed. While $W_{\text{in}}$'s expressiveness includes that of $W_B$, $W_C$, and $W_{\Delta,\uparrow}$, it does not cover the expressiveness of $A$, which is essential for seq2seq operations. Thus, $W_{\text{in}}$ offers a portion of expressivity for SSM, yet it alone remains insufficient to attain optimal performance. **`Preview`**  Updated Lemma 1 to emphasize fixed $A$ (https://anonymous.4open.science/r/ce7/l1.png) and will further smooth the Sec. 4–5 transition. > Q4: Be clear whether the model is S4 or S4D. We specifically use S4D (diagonal $A$) and will clarify this in Sec. 1, 3, and 5. > Q5: Fine-tuning strategies lack mathematical details. Thanks. We'll revise Sec. 3 ("Preliminaries") to include mathematical details on fine-tuning methods. > Q6: What is the fine-tuning time compared to LoRA? 
**`New Exp.`** Based on your feedback, we conducted two additional experiments: (i: https://anonymous.4open.science/r/ce7/f2.png) a performance comparison between SDT and LoRA under varying time budgets in a synthetic setting; and (ii: https://anonymous.4open.science/r/ce7/f3.png) a runtime analysis of SDT and LoRA for training on a single batch using a pretrained model, across different sequence lengths. We observe that our method is slightly more efficient than LoRA, particularly as the sequence length increases, because LoRA introduces additional matrix multiplications, while SDT does not. > Q7: How were $\alpha$, $\beta$ in Table 3/4 chosen, and why is it a fair comparison to LoRA? **`New Exp`** We fix $\beta = 0.99$ and sweep $\alpha \in \\{0.75, 0.90, 0.95\\}$. To ensure a fair comparison with LoRA, we use similar parameter budgets and hyperparameter sets of the same size for both methods: 45 configurations per method (15 learning rates × 3 method-specific settings). For LoRA, we additionally compare the three chosen configurations against alternative settings with similar parameter budgets (https://anonymous.4open.science/r/ce7/f4.png), showing that our selected configurations perform reasonably well. --- **Final Note:** In addition, we’ll summarize all notations in the appendix. Since you said raising your score would be “an easy reach” once issues were addressed, we hope our response meets that bar. We’d appreciate an updated score and are happy to clarify anything further.
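The permutation-equivalence fact invoked in the rebuttal above (applying the same permutation matrix $P$ to $\overline{A}_0$, $\overline{B}_0$, and $C_0$ yields a functionally equivalent SSM) is easy to check numerically. The toy linear recurrence below is an illustrative sketch under generic random parameters, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 4, 20                        # hidden dimension, sequence length
A = 0.3 * rng.normal(size=(n, n))   # discretized state matrix (scaled for stability)
B = rng.normal(size=(n, 1))
C = rng.normal(size=(1, n))
u = rng.normal(size=(T, 1))         # input sequence

def ssm_output(A, B, C, u):
    """Run the linear recurrence x_{k+1} = A x_k + B u_k, y_k = C x_k."""
    x = np.zeros((A.shape[0], 1))
    ys = []
    for u_k in u:
        x = A @ x + B * u_k
        ys.append((C @ x).item())
    return np.array(ys)

# Apply the SAME permutation P to all three parameters, as in the rebuttal:
# the permuted model's state is P^T x, and its output is unchanged.
P = np.eye(n)[rng.permutation(n)]
A_p, B_p, C_p = P.T @ A @ P, P.T @ B, C @ P

assert np.allclose(ssm_output(A, B, C, u), ssm_output(A_p, B_p, C_p, u))
```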
Summary: This paper investigates the performance of popular parameter-efficient fine-tuning methods (PEFT) (e.g., LoRA and its variants) when applied to SSMs like Mamba and hybrid models such as Jamba. It finds that LoRA-based methods consistently outperform other PEFT approaches, especially when applied to linear projection matrices, but fail to improve performance significantly when directly applied to SSM modules. To address this, the authors propose Sparse Dimension Tuning (SDT), a specialized PEFT method designed explicitly for fine-tuning SSM modules by selectively training specific channel and state dimensions based on theoretical insights. Extensive experiments demonstrate that combining SDT for SSM modules with LoRA for linear projections achieves state-of-the-art performance across diverse tasks, including natural language understanding, generation, and computer vision benchmarks, confirming SDT's effectiveness and efficiency compared to existing methods. Claims And Evidence: Yes. The claims are backed by extensive experiments and theoretical analysis. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem. The paper introduces SDT specifically to address the limitations of existing PEFT methods on SSM-based models, and it benchmarks these methods using a diverse set of tasks and datasets—including GLUE, DART, SAMSum, Spider, CIFAR-10, and CelebA—which are well-recognized for assessing performance in both language and vision applications. Theoretical Claims: The proofs for the main theoretical claims, e.g., Lemma 1, Lemma 2, and Theorem 1 and their logical structures and derivations appear sound under the stated assumptions. 
Experimental Designs Or Analyses: The authors benchmark PEFT methods on both synthetic datasets (using deep S4 models) and real-world tasks spanning GLUE, DART, SAMSum, Spider, CIFAR-10, and CelebA with SSM-based (Mamba) and hybrid (Jamba) architectures; while these experiments are well-controlled with careful hyperparameter tuning and fair parameter budget comparisons, a minor issue arises: - The focus on SSM-based and hybrid models limits the generalizability of the findings, suggesting that broader architectural testing could further strengthen the evidence. Supplementary Material: I examined the extended related works on SSMs and PEFT, where Section C provided additional details on the experimental setup and extended benchmarking results, Section D offered in-depth proofs and theoretical analyses, and Section E presented expanded experimental evaluations on deep S4 models and additional results for Mamba and Jamba. Relation To Broader Scientific Literature: The paper’s contributions are well-integrated with the broader scientific literature by extending established PEFT methods—such as prompt tuning, prefix-tuning, BitFit, and especially low-rank adaptations like LoRA—to the realm of SSMs, which have been previously developed for efficient long-sequence modeling. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: - Creative integration of established PEFT techniques like LoRA with a novel Sparse Dimension Tuning (SDT) method tailored for SSM modules. - Advances the state-of-the-art in fine-tuning SSM-based and hybrid models, which is valuable for efficient language modeling. Weakness: - Limited evaluation on a broader range of model architectures beyond SSM-based and hybrid models, which may affect the generalizability of the findings. Other Comments Or Suggestions: See above. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the encouraging feedback, particularly for recognizing that (i) our method is creative and rational, (ii) our claims are supported by extensive experiments and theoretical analysis, (iii) our theoretical claims are sound, (iv) we advance SOTA results in fine-tuning SSM-based and hybrid models, and (v) our contributions are well-integrated with the broader scientific literature. --- > Q: Limited applicability beyond SSM-based and hybrid models. Our proposed SDT algorithm is intentionally tailored for SSMs, with our theoretical analysis and experiments specifically focused on this model class. Regarding your concern, while applying SDT to pure Transformers is beyond our current scope, we demonstrate its broader potential through experiments on Mamba-II, whose SSD module closely resembles attention [1]. When applied to SSD modules with LoRA on linear projection matrices, SDT consistently outperforms LoRA alone on Mamba-II across diverse tasks (Sec E.2). Furthermore, we believe the theoretical insights and proof techniques from our analysis could prove valuable for developing PEFT methods for other architectures. --- *References:* [1] Dao, T. and Gu, A. Transformers are SSMs: Generalized models and efficient algorithms through structured state space duality. In International Conference on Machine Learning, 2024. **Final Note:** Thank you again for your valuable comments. We are grateful that you appreciate our paper’s contributions in theory, methodology, and experimentation. If you have any remaining questions, please do not hesitate to let us know. Assuming our responses have addressed your concerns satisfactorily, we kindly ask you to consider raising your score and supporting our paper. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification. I will keep my current score. 
--- Reply to Comment 1.1.1: Comment: We appreciate your engagement during the discussion, and thank you for supporting the acceptance of our paper.
Summary: This paper investigated PEFT for SSMs, like Mamba. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: yes, the experimental parts Relation To Broader Scientific Literature: it mainly focuses on PEFT for SSMs. Essential References Not Discussed: NO Other Strengths And Weaknesses: Strengths: 1. The paper is well written; there are many PEFT studies for Transformer-based LLMs, but not many PEFT studies for SSMs. 2. The paper contains many experimental results in the supplementary materials to support its claims. Weaknesses: 1. The proposed PEFT is incremental; the idea mainly comes from PEFT for Transformer-based LLMs. 2. The theoretical analysis is mainly for S4 or S6; how about S5 and Mamba2? Other Comments Or Suggestions: No Questions For Authors: 1. The proposed PEFT is incremental; the idea mainly comes from PEFT for Transformer-based LLMs. 2. The theoretical analysis is mainly for S4 or S6; how about S5 and Mamba2? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging that (i) our paper is well written, (ii) it tackles the gap where PEFT studies favor Transformer over SSM, and (iii) it contains many experimental results to support the claims. --- > Q1: The proposed PEFT is incremental, such idea is mainly from the PEFT for transformer-based LLM. We respectfully disagree. SDT is not just an incremental tweak of PEFT for Transformer-based LLMs—it’s built on a fresh theoretical analysis tailored for SSMs’ unique parameter structure. Beyond that, our paper delivers substantial contributions that stand firm, even if the method is viewed as incremental by the reviewer: * (i) **(One of the First)** A systematic benchmark of PEFT methods on SSM models, * (ii) **(First)** A theoretical analysis of PEFT in the SSM setting, and * (iii) **(One of the First)** A new PEFT method designed specifically for SSMs, supported by both theory and strong empirical results. Together, these contributions offer new insights into PEFT beyond what has been explored in Transformer-based LLMs. > Q2: The theoretical analysis is mainly for S4 or S6, how about S5 and Mamba2? * **`New Theoretical Results`** **S5**: Although our original analysis primarily focused on S4 and S6, based on your comment, we have extended our theoretical results (Lemma 2 and Theorem 1) to include S5. Specifically, Lemma 2, which characterizes the minimal number of parameters required to update a frozen S4 model for functional equivalence to a target S4, has been extended to S5 with similar conclusions, accounting for multi-channel handling. Additionally, Theorem 1, regarding the expressive power of SDT-P with LoRA on simplified SSM-based models, also holds true for S5. We provide the detailed extension for S5 here: https://anonymous.4open.science/r/ce7/l9.png. * **Mamba-II**: Although extending our theoretical analysis to Mamba-II is non-trivial, we successfully adapt our method to this architecture, as detailed in Sec. 
C.2 and E.2, and evaluate it on diverse tasks, including GLUE, DART, SAMSum, and Spider. Our method consistently outperforms LoRA alone on Mamba-II, highlighting its generalizability beyond the architectures explicitly covered in our theoretical analysis. We will acknowledge this theoretical limitation explicitly in the conclusion section, emphasizing that extending our analysis to Mamba-II is an important direction for future work. --- **Final Note:** Thank you for sharing your concerns. While we understand your perspective, we kindly remind you of the substantial contributions our paper makes and the efforts we've undertaken to address your comments. We would greatly appreciate it if you could reconsider your evaluation score and support the acceptance of our paper.
Summary: This paper investigates how PEFT methods perform on State Space Models (SSMs) (e.g. the Mamba architecture) and identifies which model components are best to target. It provides a comprehensive benchmark of existing PEFT techniques on SSM-based language models and hybrid architectures with Jamba. A key finding is that LoRA and its variants consistently outperform other PEFT approaches on SSM models​. However, LoRA’s benefit comes mainly from fine-tuning the linear projection matrices (e.g. input/output projections) – applying LoRA to the SSM-specific components does not yield additional gains​. In fact, prompt-based tuning methods (like prefix or prompt tuning) are found largely ineffective on SSMs, essentially only adjusting the initial hidden state (a severely limited form of fine-tuning, as proven in the paper’s Proposition 1)​. These insights highlighted a gap: none of the standard PEFT methods are well-suited for the internal SSM parameters. To address this, the authors propose a new method called Sparse Dimension Tuning (SDT) – a PEFT strategy tailored for SSM layers. SDT works by selectively fine-tuning only a subset of the state-space channels and state dimensions, while freezing or pruning the rest. When combining SDT (for SSM layers) with LoRA (for the linear projection layers), the paper achieves state-of-the-art fine-tuning performance on multiple benchmarks​. The approach matches or surpasses the best existing methods while training only a small fraction of model parameters, validating the effectiveness of the proposed methodology across extensive experiments. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Appendix E. Relation To Broader Scientific Literature: It is related to Parameter Efficient Fine Tuning, State Space Models and Hybrid State Space Models. No previous works on systematic studies of PEFT on SSMs. 
Essential References Not Discussed: No. Other Strengths And Weaknesses: Stength: 1. This is a well executed paper that systematically studied PEFT for the emerging neural architectures, including Mamba, Mamba 2 and its hybrid variant. The empirical results are strong and comprehensive. Other Comments Or Suggestions: No. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are delighted that the reviewer likes the paper, recognizing it as (i) well-executed, (ii) a systematic study of PEFT for emerging neural architectures (Mamba, Mamba 2, and hybrid variants), and (iii) providing strong and comprehensive empirical results. Thank you for your encouragement!
EduLLM: Leveraging Large Language Models and Framelet-Based Signed Hypergraph Neural Networks for Student Performance Prediction
Accept (poster)
Summary: The paper introduces EduLLM, a new framework for student performance prediction that combines Large Language Models (LLMs) with a Framelet-based Signed Hypergraph Neural Network (FraS-HNN). FraS-HNN is a novel approach for signed hypergraph learning, utilizing high-pass and low-pass filters to extract multi-frequency interactions between students and questions. LLMs are used to enhance semantic representation, complementing hypergraph structural learning to improve predictive accuracy. The LLM-based semantic feature extraction is relatively standard, with limited discussion on LLM selection, fine-tuning, or task adaptation. It appears to mainly convert text into embeddings and integrate them with the hypergraph structure. Claims And Evidence: Claims 1. EduLLM improves student performance prediction by integrating LLMs with hypergraph neural networks. 2. FraS-HNN effectively models signed hypergraphs, capturing both structural and semantic information to enhance prediction performance. 3. EduLLM outperforms existing state-of-the-art (SOTA) methods across multiple educational datasets. Claims 1 and 2 are supported by experiments but lack a detailed discussion on optimizing the LLM component. LLMs are primarily used for feature extraction without further fine-tuning or analysis. Claim 3 shows strong results on five datasets, but the comparison lacks larger-scale datasets and does not evaluate computational complexity. The theoretical analysis of FraS-HNN is complex but lacks an intuitive explanation of its advantages over existing methods. Methods And Evaluation Criteria: 1. The paper models student-question interactions using signed hypergraphs and integrates LLM-enhanced semantic features, which provides a degree of novelty. 2. FraS-HNN is mathematically analyzed, including its framelet-based hypergraph filtering approach. 
Theoretical Claims: Theoretical results demonstrate multi-frequency signal analysis and the design of high-pass/low-pass filters. However, comparative experiments do not show a clear advantage of FraS-HNN over traditional hypergraph GNNs (e.g., HyperGCN, HCHA). Experimental Designs Or Analyses: The model's performance is evaluated on five educational datasets, showing that EduLLM consistently achieves higher F1-score and AUC than SOTA methods. Ablation studies analyze the contributions of high-pass filters, low-pass filters, and LLMs. Limitations: 1. Lack of large-scale dataset evaluation—current datasets have limited diversity and scale. 2. No computational complexity analysis—FraS-HNN's scalability remains unclear. Supplementary Material: Appendix B: Theoretical analysis of FraS-HNN, including mathematical proofs of framelet-based hypergraph learning. Appendix C: Details on dataset construction, explaining the conversion of MCQ data into signed hypergraphs. Appendix E: Additional experiments, including hyperparameter sensitivity analysis and robustness tests. Relation To Broader Scientific Literature: 1. EduLLM uses FraS-HNN for signed hypergraph learning, related to HyperGCN, HCHA, and UniGNN. 2. The work can be seen as an extension of KT tasks, related to models such as DKT, DKVMN, AKT, and SAINT. 3. The paper applies LLMs for semantic feature extraction, aligning with recent studies on ChatGPT in personalized learning. Essential References Not Discussed: 1. HyperGCN (Yadati et al., 2019): A strong baseline for hypergraph convolution, should be included in comparative experiments. 2. AKT (Ghosh et al., 2020): A state-of-the-art knowledge tracing model, relevant for student performance prediction, should be compared against EduLLM. 3. MOOC-related work (Piech et al., 2015; Nakagawa et al., 2019): Prior studies evaluating large-scale online learning datasets—EduLLM should be tested on similar datasets for practical validation. 
Other Strengths And Weaknesses: Strengths: 1. EduLLM achieves SOTA performance on the evaluated datasets. 2. Integrating LLMs with hypergraph neural networks introduces a new perspective for student modeling. Weaknesses: 1. The study only evaluates five small datasets, lacking tests on more challenging real-world datasets (e.g., MOOCs). 2. The paper does not discuss the scalability of FraS-HNN on large-scale datasets. Other Comments Or Suggestions: Discuss LLM selection and optimization—a comparison between different LLMs (e.g., GPT-4, LLaMA) would strengthen the contribution. Questions For Authors: How was the LLM chosen? Have different LLM architectures been considered? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the constructive feedback and for acknowledging the novelty and strengths of our proposed framework. Please find our detailed responses below: - **LLM Selection, Fine-tuning, and Task Adaptation:** We appreciate the reviewer’s observation and agree that further analysis of LLM choices could be valuable in broader contexts. However, to clarify the scope of our work: LLMs in EduLLM are primarily used as a semantic information extraction tool to generate initial node features for questions in the student-question signed hypergraph, rather than being the core focus of technical innovation. **One reason** why we do not study in detail the effect of different LLM selections is that, as we would like to note, this submission is flagged under the **''Application-Driven Machine Learning''** category, where the focus is on solving a specific real-world task and not on advancing LLM development itself. **Another reason** is that we aimed to ensure fairness in the performance comparison with the baseline models, so we used the same LLM module and processing pipeline to generate the preprocessed semantic embeddings. This manner ensures that any observed improvements are due to the merits of the proposed framework, particularly the signed hypergraph setting and advantages of FraS-HNN, rather than differences in the LLM processing itself. Alternatively, to approximately assess the impact of LLM-based embeddings on model performance, we conducted a robustness study presented in **Section 4.7**. The results, shown in **Figure 3**, demonstrate that the model’s performance degrades smoothly and only slightly at higher noise levels. This robustness suggests that EduLLM is relatively insensitive to variations in the LLM-induced embeddings, implying that replacing the current LLM backbone with a different one is unlikely to significantly affect task performance. 
- **Large-scale Dataset:** We agree that evaluating on larger-scale datasets would strengthen the practical validation of EduLLM. Our current datasets are widely used and shared with baseline models, ensuring fairness and comparability. Meanwhile, our lab is constructing new large-scale datasets with MCQ texts from various subjects, which will allow us to further assess scalability and real-world applicability in future work. - **Theoretical Clarity and Intuition:** While detailed proofs are provided in the appendix, we agree that offering more intuitive explanations would help clarify the motivation and benefits of our approach. In a future updated version, we will provide more intuitive explanations of FraS-HNN’s advantages over existing methods, which we believe will also benefit researchers working on **(signed) hypergraph learning**. - **Comparisons with Existing Hypergraph and KT Models:** We would like to clarify that our problem formulation, predicting student performance at the question level using signed hypergraphs, is not entirely equivalent to the KT task, which (although related to student performance prediction) typically focuses on modeling students' evolving mastery over concepts across time. We have added additional evaluations against representative hypergraph neural networks (HNNs), including **HGNN, HyperGCN, AllDeepSets, AllSetTransformer, ED-HNN, SheafHyperGNN**. Due to space constraints, the detailed results are provided in our response to **Reviewer stfm**, clearly demonstrating FraS-HNN's effectiveness in capturing **signed high-order interactions**. - **Computational Complexity and Scalability:** We provide a comparative analysis of training complexity for several representative HNNs and our proposed FraS-HNN, summarized in the table below. 
In our formulation, $k$ (the number of high-pass filters), $J$ (the scale level in FraS-HNN), and $K$ (the largest number of non-zero values in the framelet transform matrices) are constants independent of specific hypergraph structures. In practice, $k$ and $J$ are small, and due to the sparsity of hypergraph framelets, $K$ is typically small and often comparable to $\|H\|_0$. As a result, FraS-HNN's overall complexity is comparable to models such as AllDeepSets and ED-HNN, without introducing significant overhead.

| Model | Computational Complexity |
|---------------------|----------------------------------|
| UniGCNII | $\mathcal{O}(TL(N+M+\|H\|_0)d + TLNd^2)$ |
| Deep-HGCN | $\mathcal{O}(TLM'd + TLNd^2)$ |
| AllDeepSets | $\mathcal{O}(TL\|H\|_0d + TL(N+M)d^2)$ |
| ED-HNN | $\mathcal{O}(TL\|H\|_0d + TL(N+M)d^2)$ |
| **FraS-HNN (ours)** | $\mathcal{O}(TL(kJ+1)Kd + TL(N+M)d^2)$ |

Here, $N$ and $M$ denote the number of nodes and hyperedges, respectively; $\|H\|_0$ is the number of non-zero entries in the incidence matrix; $T$, $L$, and $d$ denote the number of epochs, layers, and feature dimensions; $M'$ is the number of edges in the clique expansion (when transforming the hypergraph into a graph).
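To make the signed-hypergraph construction discussed in this thread concrete (questions as hyperedges whose members are the students who attempted them, with the sign encoding correctness), here is a minimal illustrative sketch; the data layout and function name are hypothetical and not taken from the released EduLLM code.

```python
import numpy as np

def signed_incidence(responses, n_students, n_questions):
    """Signed incidence matrix H: H[s, q] = +1 if student s answered
    question q correctly, -1 if incorrectly, 0 if q was not attempted."""
    H = np.zeros((n_students, n_questions))
    for student, question, correct in responses:
        H[student, question] = 1.0 if correct else -1.0
    return H

# Toy (student, question, correct?) records.
responses = [(0, 0, True), (1, 0, False), (2, 0, True),
             (0, 1, False), (2, 1, True)]
H = signed_incidence(responses, n_students=3, n_questions=2)

# Each column is one signed hyperedge: question 0 connects students {0, 1, 2},
# with student 1 on the negative side because of the incorrect answer.
assert (H[:, 0] == [1.0, -1.0, 1.0]).all()
```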
Summary: This paper presents EduLLM, a novel framework that integrates Large Language Models (LLMs) with a Framelet-based Signed Hypergraph Neural Network (FraS-HNN) to address student performance prediction in personalized education systems. EduLLM models the complex structural and semantic relationships between students and multiple-choice questions (MCQs) by constructing signed hypergraphs, where positive and negative hyperedges capture correct and incorrect responses. LLMs provide fine-grained semantic embeddings for educational content, which are combined with the multi-frequency features extracted by FraS-HNN using low-pass and high-pass filters. Through comprehensive experiments on five real-world educational datasets, EduLLM demonstrates significant improvements over strong baselines, highlighting its effectiveness in both capturing high-order relationships and integrating semantic information for performance prediction tasks.

Claims And Evidence: Yes, the claims made for EduLLM in this submission are well-supported by comprehensive evidence, including both rigorous theoretical analysis and extensive empirical results.

Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are well-aligned with the problem of student performance prediction.

Theoretical Claims: Yes, I have reviewed the provided theoretical analysis and proofs, particularly those related to the framelet-based signed hypergraph neural network (FraS-HNN). The mathematical formulations, including the construction of framelets, spectral filtering, and the tight frame properties, are clearly presented and appear logically sound.

Experimental Designs Or Analyses: Yes, I have reviewed the experimental design and analyses. The experiments are well-structured, using five real-world educational datasets that are appropriate for the student performance prediction task. The evaluation includes comparisons against strong baselines, thorough ablation studies to assess the contributions of key components, parameter sensitivity analyses, and robustness tests for LLM-induced semantic representations. Overall, the experimental designs and analyses are comprehensive.

Supplementary Material: Yes, I have reviewed the supplementary material. It provides detailed theoretical derivations, additional experimental results, and extended explanations that complement the main text. The supplementary content is consistent with the core methodology, offering clarity on the mathematical foundations of FraS-HNN and further supporting the empirical findings presented in the paper.

Relation To Broader Scientific Literature: Technically, EduLLM advances student performance prediction (a classic educational data mining task) by incorporating LLM-generated embeddings to enrich the feature representations of questions, helping to model semantic nuances in student interactions. In terms of problem-solving, prior studies have applied signed graphs to model positive and negative relationships (e.g., correct/incorrect answers) in educational scenarios. However, traditional signed graphs are limited to pairwise relations. EduLLM extends this by introducing signed hypergraphs, which can capture higher-order interactions (such as multiple students answering the same question) and better reflect the group dynamics present in educational settings. EduLLM builds on this by applying framelet theory to signed hypergraphs, enabling the simultaneous extraction of global (low-pass) and discriminative (high-pass) features from complex educational interactions. I think this may potentially motivate more follow-up works on signed hypergraphs.

Essential References Not Discussed: The paper cites a solid range of related works across knowledge tracing, signed graph learning, hypergraph neural networks, and educational data mining. The literature review is up to date.

Other Strengths And Weaknesses:

S1: [New Problem Formulation] The paper introduces a well-defined and meaningful problem setting by modeling student performance prediction with signed hypergraphs to capture both correct and incorrect interactions in a high-order structure.

S2: [Novel Framework] The proposed EduLLM framework uniquely combines LLM-based semantic embeddings with a framelet-based signed hypergraph neural network, offering an innovative solution to educational prediction tasks.

S3: [Comprehensive Theoretical and Empirical Studies] The work provides both solid theoretical foundations and extensive empirical validation through detailed experiments, ablation studies, and evaluation on multiple real-world datasets.

W1: The application and evaluation are focused on educational data, with limited discussion of how the framework could generalize to other domains or tasks beyond student performance prediction. Any further insights?

W2: Although I am familiar with framelets and wavelets, I am curious whether the current framework could be further enhanced by exploring alternative wavelet designs on hypergraphs. Specifically, would incorporating different types of wavelets lead to improved feature extraction or better adapt to various hypergraph structures?

Other Comments Or Suggestions: See the above weaknesses.

Questions For Authors:

Q1: I am curious whether framelets can be designed specifically for directed and signed hypergraphs (I think this is a brand-new definition, to the best of my knowledge). Although this is beyond the scope of the current work, it would be interesting to understand the key challenges involved.

Code Of Conduct: Affirmed.

Overall Recommendation: 5
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for the encouraging feedback on our work. We are pleased that you recognized **the novelty and technical contributions of EduLLM, including its theoretical soundness, methodological innovation, and comprehensive experimental validation**. For your key concerns, please find our responses below:

- **Generalization Beyond Educational Domains & Alternative Wavelet Designs:** Thank you for noting these issues. For detailed clarifications regarding the potential generalization of FraS-HNN beyond educational applications and the discussion on exploring alternative wavelet designs, please kindly refer to our responses to **Reviewer Lfh6** (who is also curious about this). In addition, we would like to emphasize that, as flagged in our submission type, this work is submitted under the **Application-Driven ML Submissions** category. Our particular focus is specifically on the student performance prediction task, a well-recognized and meaningful problem in the educational domain. The proposed EduLLM framework is newly developed for solving this specific problem. Therefore, we did not engage in studying broader generalization across domains, as that falls outside the intended scope of our application-driven submission. For more context, please refer to the **Supplementary Guidelines for Reviewing Application-Driven ML Submissions** (see https://icml.cc/Conferences/2025/ReviewerInstructions). Hopefully, this clarifies the position of our work within the specific context of this year’s ICML submission categories.

- **Framelets for Directed and Signed Hypergraphs:** We appreciate the reviewer’s insightful question regarding the extension of framelet theory to directed and signed hypergraphs.
Although there is very recent work defining the notion of **directed hypergraphs** (see: [https://openreview.net/forum?id=h48Ri6pmvi](https://openreview.net/forum?id=h48Ri6pmvi)), to the best of our knowledge, a formal and unified definition of **directed and signed hypergraphs** has not yet appeared in the literature. Developing framelet transforms for such structures would require extending spectral theory to handle both edge directionality and polarity, potentially involving non-symmetric Laplacians or incidence-based spectral operators. Key challenges include defining appropriate inner product spaces, preserving desirable mathematical properties (e.g., tightness and localization), and ensuring the interpretability of the resulting representations. While this extension is beyond the scope of the current work, we view it as an important direction for future investigation. Of course, we hope that our current work, particularly the key module FraS-HNN, **can motivate further follow-up studies on directed and signed hypergraphs, whether through model development or advanced applications**. Again, we sincerely thank you for recognizing the merits of our proposed FraS-HNN module (also recognized by **Reviewer Lfh6**), which we believe contributes not only to the student performance prediction task **but also holds broader value for advancing (signed) hypergraph learning in general**. In response to **Reviewer stfm's** suggestions, we have conducted additional empirical studies that include comparisons with several representative hypergraph neural network (HNN) baselines, i.e. HGNN (AAAI, 2019), HyperGCN (NeurIPS, 2019), AllDeepSets (ICLR, 2022), AllSetTransformer (ICLR, 2022), ED-HNN (ICLR, 2023), and SheafHyperGNN (NeurIPS, 2023), by replacing the FraS-HNN backbone of EduLLM with each of these modules. Please refer to our response to Reviewer stfm for further details. 
These supplementary results further validate the effectiveness of the high-pass and low-pass filters in FraS-HNN, as positively noted in your comments. For future work, we plan to construct more challenging **signed hypergraph datasets beyond the educational domain (e.g., in social networks, biology, traffic systems)** to support the development of this emerging direction in terms of theoretical understanding, model design, and real-world applications. We believe our framework, along with the core FraS-HNN module, can benefit other complex tasks and applications where signed hypergraphs (or their variants) align well with the problem formulation. We hope the above responses have clarified your questions and comments. We welcome any further discussion or suggestions.
Summary:
- This paper presents EduLLM, a novel method for predicting student performance by integrating LLM-based semantic understanding with structural modeling via a framelet-based signed hypergraph neural network (FraS-HNN).
- Signed hypergraphs capture higher-order interactions and differentiate correct from incorrect student responses, while LLMs enhance the semantic representation of questions.
- EduLLM leverages framelet transforms to extract both low- and high-frequency information from complex educational data.

Claims And Evidence:
- The claims in the submission are well-supported.

Methods And Evaluation Criteria:
- EduLLM and the evaluation criteria are well-suited for student performance prediction, and the benchmark datasets effectively demonstrate its effectiveness.

Theoretical Claims:
- The theoretical proofs, formulations, and derivations are correct; however, the authors need to address some typos.

Experimental Designs Or Analyses:
- The use of diverse educational datasets, along with detailed comparisons and ablation analyses, provides solid evidence supporting the model's effectiveness.

Supplementary Material:
- The supplementary content effectively clarifies the theoretical foundations of the proposed model, reinforcing the rationale behind the idea and providing additional evidence to support the main claims of the paper.

Relation To Broader Scientific Literature:
- The key contributions are closely aligned with ongoing research in educational data mining, hypergraph learning, and the application of large language models.
- FraS-HNN introduces a novel method for hypergraph learning, which can be considered a model-level contribution, not confined to a specific application.
- FraS-HNN has the potential to inspire further advancements in hypergraph neural networks, hypergraph learning, and related applications.

Essential References Not Discussed:
- This paper has appropriately cited relevant sources.

Other Strengths And Weaknesses:
- Strengths:
  - The paper redefines student performance prediction as a hypergraph learning problem, which effectively models both correct and incorrect student responses through positive and negative edges.
  - FraS-HNN applies multiscale signal processing with low-pass and high-pass filters to capture both shared patterns and individual differences within student interactions, enhancing the representation of complex educational relationships.
  - EduLLM successfully combines hypergraph structural learning with semantic features from LLMs, leading to a more comprehensive understanding of student-content interactions and delivering improved prediction performance across multiple datasets.
  - The theoretical analysis of FraS-HNN is sound.
- Weaknesses:
  - While the student performance prediction task is well-suited to the signed hypergraph learning formulation (a novel and insightful perspective proposed by the authors) and EduLLM demonstrates strong performance on educational datasets, the paper offers limited discussion of its potential generalization to other domains beyond student performance prediction.
  - The motivation for selecting the Haar-type filter in the framelet construction is not clearly explained. The authors should elaborate on why this filter was chosen and whether other filter types were considered. Additionally, could alternative filters offer advantages in capturing different patterns within the signed hypergraph?

Other Comments Or Suggestions:
- To better highlight the advantages of signed hypergraph modeling, it would be helpful to include visual examples that compare the structural differences between traditional signed graphs and signed hypergraphs using real student-question interaction data.

Questions For Authors:
- The authors should summarize the key insights of these theoretical properties in the main text.
- I strongly suggest that the authors enhance the clarity of their Abstract by clearly stating their motivation and key insights.

Code Of Conduct: Affirmed.

Overall Recommendation: 5
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for the thoughtful evaluation. We are pleased that the novelty and effectiveness of EduLLM and FraS-HNN are recognized, along with the **theoretical soundness, strong empirical results, and broader contributions to hypergraph learning and educational data mining**. Below we provide point-by-point clarifications regarding the concerns raised:

- **Generalization Beyond Educational Domains:** We agree that it is important to consider the applicability of FraS-HNN beyond the student performance prediction task. Although the current work is tailored for modeling educational data, **the proposed signed hypergraph formulation and framelet-based representation learning are inherently general and applicable to domains involving higher-order and polarity-sensitive interactions**. For instance, tasks such as signed social network modeling, sentiment-based recommendation, or misinformation spread detection can similarly benefit from the ability to distinguish positive and negative multi-way relations. The design of FraS-HNN does not rely on domain-specific assumptions, which supports its potential transferability.

- **Motivation for Haar-type Filter:** Mathematically, the choice of the Haar-type filter is based on its efficiency, orthogonality, and suitability for decomposing signals into coarse (low-frequency) and detail (high-frequency) components. In the context of signed hypergraphs, this allows the model to simultaneously capture common learning behaviors and student-specific deviations. While other filters (such as Daubechies or spline-based filters) may offer smoother basis functions or better frequency localization, the Haar-type filter was selected as a starting point due to its simplicity and proven utility in prior work on multiscale graph signal processing. Exploring alternative filters remains a promising direction for future extension.
- **Visual Comparison with Signed Graphs:** We appreciate the suggestion to better highlight the advantages of signed hypergraph modeling. To support this, we will provide illustrative examples showing how signed hypergraphs can represent multi-way interactions (e.g., groups of students responding to related questions) with polarity, in contrast to signed graphs that are limited to pairwise relations.

- **Further Insights and Clarification for the Theoretical Properties:** The intention behind our theoretical analysis is to reveal how **the proposed FraS-HNN effectively captures both low-pass and high-pass components of node signals in signed hypergraphs**, enabling the model to differentiate shared and individualized patterns within complex educational interactions. To make these insights more accessible to the readers, we will summarize the key takeaways in the main text (in the updated version), such as: (1) how the framelet transform enables multiscale analysis on signed hypergraphs, (2) the specific role of positive and negative hyperedges in modulating spectral responses, and (3) how this facilitates richer representation learning compared to traditional graph-based or unsigned hypergraph approaches.

Again, we sincerely thank you for recognizing the merits of our proposed FraS-HNN module, which we believe contributes not only to the student performance prediction task **but also holds broader value for advancing (signed) hypergraph learning in general**. In response to **Reviewer stfm's** suggestions, we have conducted additional empirical studies that include comparisons with several representative hypergraph neural network (HNN) baselines, i.e., HGNN (AAAI, 2019), HyperGCN (NeurIPS, 2019), AllDeepSets (ICLR, 2022), AllSetTransformer (ICLR, 2022), ED-HNN (ICLR, 2023), and SheafHyperGNN (NeurIPS, 2023), by replacing the FraS-HNN backbone of EduLLM with each of these modules. Please refer to our response to Reviewer stfm for further details.
These supplementary results further validate the effectiveness of the high-pass and low-pass filters in FraS-HNN, as positively noted in your comments. For future work, **we plan to construct more challenging signed hypergraph datasets beyond the educational domain (e.g., in social networks, biology, traffic systems)** to support the development of this emerging direction in terms of theoretical understanding, model design, and real-world applications. We believe our framework, along with the core FraS-HNN module, can benefit other complex tasks and applications where signed hypergraphs (or their variants) align well with the problem formulation. We hope the above responses clarify the key concerns raised in your comments and questions. Please feel free to reach out with any further suggestions or points for discussion. We are happy to engage further to improve the clarity and impact of our work.
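As a small numerical check of the tight-frame (perfect reconstruction) property behind the Haar-type filter discussed in this rebuttal, the sketch below uses the standard Haar-type masks (low-pass $\cos(\xi/2)$, high-pass $\sin(\xi/2)$) as an illustrative stand-in; the paper's exact filter bank, scale levels, and eigenvalue rescaling may differ.

```python
import numpy as np

# Standard Haar-type framelet filter pair (illustrative):
#   low-pass  a_hat(xi) = cos(xi / 2)  -> keeps smooth components
#   high-pass b_hat(xi) = sin(xi / 2)  -> keeps detail components
def a_hat(xi):
    return np.cos(xi / 2)

def b_hat(xi):
    return np.sin(xi / 2)

# Grid standing in for (rescaled) hypergraph Laplacian eigenvalues.
xi = np.linspace(0.0, np.pi, 200)

# Tight-frame condition: |a_hat|^2 + |b_hat|^2 = 1 everywhere, so the
# low-pass and high-pass channels jointly preserve all signal energy
# and the decomposition is exactly invertible.
energy = a_hat(xi) ** 2 + b_hat(xi) ** 2
```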
Summary: This paper introduces EduLLM, a framework that combines large language models (LLMs) with hypergraph learning to improve student performance prediction. Traditional methods mainly rely on historical response patterns but struggle to capture the complex interactions between students and learning content. To address this, EduLLM integrates FraS-HNN, a spectral-based model for signed hypergraph learning, where students and questions are represented as nodes, and response records are modeled as signed hyperedges to capture both structural and semantic relationships. FraS-HNN utilizes framelet-based filters to extract multi-frequency features, while EduLLM enhances predictions by incorporating fine-grained semantic features from LLMs. Experimental results on multiple datasets show that EduLLM outperforms existing approaches, demonstrating the effectiveness of combining LLMs with signed hypergraph learning.

Claims And Evidence: Overall, the claims in this paper are fairly reasonable.

Methods And Evaluation Criteria: Overall, the proposed method appears relatively simple, which limits its level of innovation. Additionally, some design motivations lack sufficient explanation of their rationale and justification. For example:

1. The introduction of LLMs in the method is solely for preprocessing raw text data: extracting keywords from multiple-choice question descriptions to construct a dictionary and using GloVe to learn representations as model inputs. However, I am curious why LLMs are considered an integral part of the model when they are merely used as a tool. There is no representation learning, fine-tuning, or specifically designed prompt engineering involved, and the approach relies on the simplest GloVe embeddings. Moreover, how is the stability of LLM outputs ensured during data processing, and how is their contribution controlled proportionally?

2. The use of signed hypergraphs seems reasonable, but what is the specific novel design of this module in this paper? In particular, what is the motivation behind the framelet-based signed hypergraph convolution? How does this design specifically cater to the task of student performance prediction?

Theoretical Claims: The theoretical section of this paper primarily discusses the properties of signed hypergraph neural networks, but it is unclear how these properties relate to the student performance prediction scenario being modeled.

Experimental Designs Or Analyses: I believe the experiments are quite insufficient. First, there are too few comparison methods. Why are only graph-based methods compared, and why are some common approaches in student performance prediction, such as cognitive diagnosis, not included? Additionally, since graph-based methods are being compared, why not include some state-of-the-art graph representation learning methods, including hypergraphs and graph transformers?

Supplementary Material: I have briefly reviewed the content in the appendix.

Relation To Broader Scientific Literature: To be honest, the motivation of this paper is unclear. The authors state, "While effective, these methods often fall short in capturing the intricate interactions between students and learning content, as well as the subtle semantics of these interactions," which seems unreasonable. There has already been a substantial amount of research on cognitive diagnosis methods based on graph learning to explore the relationships between learners and learning elements. However, the authors do not mention or compare these existing approaches, which is quite surprising.

Essential References Not Discussed:

[1] Gao W, Liu Q, Huang Z, et al. RCD: Relation map driven cognitive diagnosis for intelligent education systems[C]//Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2021: 501-510.

[2] Shao P, Yang Y, Gao C, et al. Exploring Heterogeneity and Uncertainty for Graph-based Cognitive Diagnosis Models in Intelligent Education[J]. arXiv preprint arXiv:2403.05559, 2024.

Other Strengths And Weaknesses: Overall, this paper needs improvement. At least in terms of its innovation and contribution to the specific field, it is limited, which is insufficient for ICML. Specifically, the paper has several notable weaknesses:

1. The writing lacks logical clarity, and the motivation is not adequately addressed. Many sections are overly redundant and verbose.
2. The innovation at the method level is limited, particularly regarding the special design and improvements for student performance prediction that the authors emphasize, as mentioned earlier.
3. The experiments are insufficient and omit necessary state-of-the-art methods in the field.
4. The descriptions and discussions of the experiments are limited, especially regarding the performance improvements of the model itself and the corresponding conclusions.

Other Comments Or Suggestions: Refer to Strengths & Weaknesses.

Questions For Authors: Refer to Strengths & Weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for carefully reading our paper and providing detailed feedback. We respectfully offer the following clarifications:

- **Further Clarification on Motivation:** While cognitive diagnosis models have explored learner-element relationships using graphs, our goal is to provide a new perspective by modeling **student-question interactions using signed hypergraphs**. Specifically, in the context of **student performance prediction**, where responses can be either correct or incorrect, we introduce **positive and negative hyperedges** to explicitly encode this polarity in group-level interactions. We believe this formulation brings a fresh and principled structural view to the problem, which forms the core of our motivation.

- **Clarifying the Role of LLMs:** We would like to note that this submission is flagged under the **Application-Driven Machine Learning** type, where the focus is on solving a specific real-world task rather than developing LLMs themselves. In our framework, LLMs are used as a semantic feature extraction tool to generate question embeddings, serving as initial node features in the signed hypergraph. One reason we did not explore different LLM selections or prompt strategies is that we aimed to ensure fairness in the performance comparison with the baseline models. We used the same LLM module and processing pipeline for generating the preprocessed semantic embeddings. This ensures that any observed improvements are due to the merits of the proposed framework, particularly the signed hypergraph setting and the advantages of FraS-HNN, rather than differences in the LLM processing itself. Regarding your concern about the **stability** and **proportional contribution** of LLM-induced embeddings, we conducted a robustness test in **Section 4.7**, where additive Gaussian noise was introduced to the semantic embeddings to simulate perturbations.
The results (in Figure 3) indicate that performance degrades smoothly and moderately as the noise level increases, suggesting that EduLLM is generally **robust to fluctuations in LLM outputs**. To a certain extent, this empirical evidence supports the **stability** of the model with respect to semantic input variations. In addition, the **advantage of using LLM-induced semantic embeddings**, as opposed to initial embeddings, has already been validated in related works such as SBCL and LLM-SBCL.

- **Comparisons and Baseline Selection:** We would like to clarify that our problem formulation, i.e., modeling student-question interactions via **signed hypergraphs**, is fundamentally different from traditional **knowledge tracing** or **cognitive diagnosis** tasks. Specifically, cognitive diagnosis methods are designed around concept-level mastery modeling **over time** and typically require a fine-grained concept-question mapping. In contrast, our approach focuses on **question-level prediction** using **signed hyperedges** to represent the correctness of responses, which is not directly supported by the data structures or assumptions of cognitive diagnosis datasets. Therefore, a direct comparison would be misaligned and may lead to unfair conclusions. That said, we acknowledge the broader relevance of cognitive diagnosis research in educational modeling and will incorporate an appropriate discussion in a future version. Additionally, based on your suggestion, we have included comparisons with hypergraph neural network (HNN) baselines, including **HGNN, HyperGCN, AllDeepSets, AllSetTransformer, ED-HNN, and SheafHyperGNN**, by replacing the FraS-HNN backbone of EduLLM with each of these HNN modules.
As shown in the table below, EduLLM (with FraS-HNN) consistently outperforms the variants equipped with each HNN module across all datasets, demonstrating the effectiveness of FraS-HNN in modeling **signed high-order interactions** and its potential to benefit future research in **(signed) hypergraph learning**.

| Model\Dataset | Sydney19351 | Sydney23146 | Biology | Cardiff20102 | Law |
|--------------------|----------------|-----------------|-----------------|-----------------|-----------------|
| HGNN | 0.606±0.014 | 0.619±0.013 | 0.673±0.006 | 0.624±0.007 | 0.905±0.011 |
| HyperGCN | 0.620±0.012 | 0.650±0.038 | 0.651±0.016 | 0.625±0.023 | 0.901±0.032 |
| AllDeepSets | 0.626±0.017 | 0.660±0.012 | 0.697±0.008 | 0.637±0.023 | 0.898±0.009 |
| AllSetTransformer | 0.618±0.022 | 0.661±0.010 | 0.689±0.016 | 0.644±0.029 | 0.906±0.009 |
| ED-HNN | 0.662±0.023 | 0.708±0.032 | 0.715±0.039 | 0.673±0.026 | 0.910±0.011 |
| SheafHyperGNN | 0.684±0.023 | 0.711±0.030 | 0.732±0.022 | 0.687±0.025 | 0.914±0.013 |
| **EduLLM (with FraS-HNN)** | **0.712±0.016** | **0.829±0.006** | **0.809±0.010** | **0.753±0.011** | **0.945±0.005** |

---

Rebuttal Comment 1.1:

Comment: Thank you to the authors for their response, but I believe several of my concerns remain unaddressed:

* The core contribution claimed by the authors is the use of signed hypergraphs to model student-question interactions from a new perspective. **However, this approach has already been explored in recent work on student modeling [1,2,3], where signed hypergraphs are essentially a combination of existing techniques**. Therefore, the claimed novelty of this work is rather limited.
* The authors argue that the problem definition in this paper is fundamentally different from traditional cognitive diagnosis because it is based on signed hypergraphs. I find this claim unconvincing.
**The essence of the problem remains unchanged: as described in Section 2, the use of signed hypergraph structures to predict student-question responses still reduces to a binary prediction task. The optimization objective is not redefined, nor is a new task proposed** (e.g., transitioning from static classification to temporal forecasting). In fact, the so-called "student performance prediction" task in this work appears to be a degenerate form of cognitive diagnosis. Cognitive diagnosis aims to uncover interpretable student abilities grounded in educational psychology to achieve reliable prediction, whereas student performance prediction here is more of a black-box binary classification with neural networks, lacking interpretability.
* **The paper fails to compare against state-of-the-art methods in student ability modeling, particularly those that involve hypergraphs or signed modeling. This omission is problematic and undermines the paper’s empirical rigor.**
* Regarding the use of LLMs: while the authors emphasize that this work is an application of machine learning and that LLMs are merely tools, a substantial portion of the claimed contributions in the paper revolves around LLMs (e.g., the third listed contribution, which highlights the novelty of the proposed framework). The model is even named “EduLLM,” which seems inappropriate. **The authors have not made targeted adaptations or innovations that leverage the unique potential of LLMs within the specific educational scenario studied, and thus the technical novelty remains unconvincing.**
* **The writing lacks clarity, and several sections suffer from redundancy and poor organization.** For example, the stated motivation — “these methods often fall short in capturing the intricate interactions between students and learning content, as well as the subtle semantics of these interactions” — is questionable.
These aspects have been extensively studied in various student performance prediction and cognitive diagnosis works. The paper would benefit from a more thorough and contextually relevant discussion of related work in educational research.

Of course, the paper also has its merits, as noted by the other reviewers who gave strong accept recommendations. **However, from the perspective of educational data mining, I believe the current version still has significant issues that need to be addressed before the paper can be considered for acceptance.**

***

[1] Shen J, Qian H, Liu S, et al. Capturing Homogeneous Influence among Students: Hypergraph Cognitive Diagnosis for Intelligent Education Systems[C]//Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2024: 2628-2639.

[2] Shao P, Yang Y, Gao C, et al. Exploring Heterogeneity and Uncertainty for Graph-based Cognitive Diagnosis Models in Intelligent Education[J]. arXiv preprint arXiv:2403.05559, 2024.

[3] Qian H, Liu S, Li M, et al. ORCDF: An Oversmoothing-Resistant Cognitive Diagnosis Framework for Student Learning in Online Education Systems[C]//Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2024: 2455-2466.

---

Reply to Comment 1.1.1:

Comment:

>- **[On the Novelty of Using Signed Hypergraphs]**: We thank the reviewer for pointing out these related works [1–3] that explore hypergraph-based approaches for student modeling. However, it is important to clarify that these works focus on hypergraphs **without incorporating signed information**, that is, they **do not model signed hyperedges** that explicitly distinguish between correct and incorrect student responses. In contrast, our work introduces **a new problem formulation based on signed hypergraphs for student performance prediction**, where polarity (correctness) is embedded directly into the structural representation.
We would also like to highlight that our proposed FraS-HNN outperforms models that only use unsigned hypergraph structures, as demonstrated in the supplementary experiments (**see our first-round rebuttal responses, which we sincerely appreciate your engagement with**). This empirical advantage further supports the relevance and utility of our signed formulation for this task. >- **[Distinction from Cognitive Diagnosis Task]**: We agree that both student performance prediction and cognitive diagnosis aim to predict student outcomes, but we respectfully emphasize that the problem formulations are distinct. CD is typically grounded in interpretable latent traits, Q-matrices, or concept mappings, whereas **our formulation makes no such assumption**. Moreover, our used benchmark datasets for performance prediction **do not contain concept labels or Q-matrices**, making them incompatible with traditional CD frameworks. Our task focuses on question-level prediction by modeling correctness-aware interactions through signed hypergraphs, without requiring additional latent or concept-level supervision. While both tasks may produce binary outcomes, **our goal is not to replace cognitive diagnosis but to offer a structure-based, polarity-sensitive alternative with a different modeling philosophy**. >- **[On Comparisons with Other Related Works in Student Ability Modeling]**: We appreciate the recommendation to include comparisons with recent works such as ORCDF [3] or HCD [1]. As we clarified in our previous response, many of these works are based on fundamentally different problem settings (e.g., concept-aware diagnosis or sequential modeling), and thus not directly aligned with our structure-based, question-level prediction task. That said, **we agree that acknowledging and discussing these works in more detail would enhance the completeness of the related work section, and we will incorporate a contextualized comparison in a future revision**. 
We also believe our additional comparisons with classic hypergraph neural network baselines (i.e., HGNN, HyperGCN, AllDeepSets, AllSetTransformer, ED-HNN, and SheafHyperGNN) already validate the effectiveness of FraS-HNN under fair structural settings. >- **[On the Role of LLMs and the Naming of EduLLM]**: We appreciate the reviewer’s perspective and understand the concern regarding the role of LLMs in our framework. As clarified previously, our primary technical contribution lies in the structural design of FraS-HNN, while LLMs are used as 'off-the-shelf' tools to extract semantic embeddings for MCQs. The name "EduLLM" was chosen not to suggest innovation in LLM modeling itself, but rather to reflect the integration of semantic representations from LLMs with signed hypergraph-based structural modeling for educational applications. We hope that this concise naming does not cause confusion, but instead helps motivate future research into more tightly coupled or co-designed methods that combine LLMs and hypergraph learning, potentially leading to new frameworks for diverse application scenarios. >- **[On Writing Clarity]**: We appreciate the feedback and agree that the motivation could be made more context-aware and tightly linked to existing educational research. While our intention was to provide a structural perspective on modeling correctness-aware interactions, we will revise the introduction to more carefully position our work relative to both cognitive diagnosis and student modeling literature. We also acknowledge redundancy in some sections and will revise the manuscript to improve clarity and organization in the final version. > Finally, we sincerely thank the reviewer for the prompt, detailed, and constructive follow-up discussion. 
From our perspective, **your comments, particularly from the angle of educational data mining and/or student modeling, complement the feedback from other reviewers**, who primarily focused on the hypergraph representation learning aspects of our work. This diversity of perspectives has helped us better position, clarify, and strengthen the contributions of our paper (**which will be reflected in the updated version**), and we are truly grateful for the thoughtful engagement. If there are any remaining questions, we are happy to engage in further discussion and provide additional explanations where needed.
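The signed-hypergraph formulation discussed in this thread (each question as a hyperedge over the students who attempted it, with polarity distinguishing correct from incorrect responses) can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' implementation; the triple format and function name are hypothetical.

```python
def build_signed_incidence(responses):
    """Map (student, question, correct) triples to a signed hypergraph
    incidence: each question is a hyperedge over the students who attempted
    it, with polarity +1 for a correct response and -1 for an incorrect one."""
    incidence = {}
    for student, question, correct in responses:
        incidence.setdefault(question, {})[student] = 1 if correct else -1
    return incidence

# Hypothetical response log: student u1 answers q1 and q2 correctly,
# student u2 answers q1 incorrectly.
responses = [("u1", "q1", True), ("u2", "q1", False), ("u1", "q2", True)]
print(build_signed_incidence(responses))
# {'q1': {'u1': 1, 'u2': -1}, 'q2': {'u1': 1}}
```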
rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking
Accept (oral)
Summary: This paper introduces rStar-Math, a novel approach demonstrating that small language models (SLMs) can achieve state-of-the-art mathematical reasoning capabilities without relying on knowledge distillation from larger models. The key innovation is a self-evolving deep thinking framework that enhances the reasoning ability of SLMs through Monte Carlo Tree Search (MCTS). 1. rStar-Math uses an SLM-based policy model and reward model to iteratively improve problem-solving ability through multiple rounds of self-improvement. 2. Major innovations: (a) code-augmented chain-of-thought data synthesis; (b) a Process Preference Model (PPM); (c) four rounds of self-evolution (solving increasingly difficult mathematical problems). Claims And Evidence: Questions for the claims: rStar-Math Exhibits Intrinsic Self-Reflection Without Explicit Training: Some examples (Figure 4) show that the model backtracks and corrects mistakes, but it is unclear whether this behavior is a direct result of MCTS or an emergent property. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-aligned with the problem of improving math reasoning in SLMs. The combination of MCTS-guided deep thinking, PPM-based process rewards, and iterative self-evolution is novel and well-validated through comprehensive benchmarking. However, high computational costs and the lack of explicit theorem-proving evaluation remain areas for improvement. Theoretical Claims: This is not a theory paper. Experimental Designs Or Analyses: Minor issues: 1. Limited theorem-proving evaluation: The benchmarks focus primarily on word problems and algebraic reasoning, but lack an explicit evaluation on theorem proving, despite claims of generalization. 2. Unclear computational costs. Supplementary Material: Yes, I reviewed the supplementary material included in the appendix. Minor issues: No sensitivity analysis on hyperparameters (e.g., effect of different exploration constants in MCTS). 
Relation To Broader Scientific Literature: 1. Unlike GPT-distilled datasets, rStar-Math bootstraps its own training data via MCTS, eliminating reliance on larger models. 2. rStar-Math improves MCTS by integrating code execution validation and self-evolving data generation, leading to more reliable stepwise reasoning trajectories. 3. Unlike prior PRMs, the PPM avoids noisy absolute score annotations and instead uses a pairwise ranking loss, improving reward signal quality. Essential References Not Discussed: None Other Strengths And Weaknesses: Weaknesses are mainly in computational efficiency and theorem-proving evaluation. Other Comments Or Suggestions: None Questions For Authors: 1. No sensitivity analysis of MCTS parameters (e.g., search depth, exploration constant)—it is unclear whether performance saturates beyond 64 rollouts. 2. Computational efficiency of self-evolution is unclear—training costs (weeks of GPU time) may be prohibitive for broader adoption. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: >Q1: No sensitivity analysis of MCTS parameters—it is unclear whether performance saturates beyond 64 rollouts.

**Response**: Thank you for your thoughtful review and for recognizing our contributions. We sincerely appreciate your suggestions and have conducted additional analysis on MCTS parameters, specifically focusing on the number of candidate nodes per step and the number of rollouts, which we found to be the most influential factors.

1) **Number of candidate nodes per step**: To further analyze this parameter, we conducted additional experiments on MATH-500 and AIME, testing different candidate node settings (4, 8, 16, 32, 40) under 8 and 64 rollouts. Notably, our paper adopts node=32. As shown in the table below, increasing the number of candidate nodes generally improves accuracy, but beyond 32 nodes, performance saturates.

|MATH-500|8 rollouts| 64 rollouts|
| :--: | :--: | :--: |
| node=4| 87.2 | 88.8 |
|node=8| 87.2 | 88.8 |
|node=16| 88.4 | 89.0 |
|node=32| 89.4 | 90.0 |
|node=40| 89.4| 90.0|

|AIME|8 rollouts| 64 rollouts|
| :--: | :--: | :--: |
| node=4| 33.3 | 36.7 |
|node=8| 33.3 | 43.3 |
|node=16| 36.7 | 50.0 |
|node=32| 50.0 | 53.3 |
|node=40|46.7|53.3|

2) **Number of MCTS rollouts**: As mentioned in Section 4.2 (Scaling Up Test-Time Computation), different benchmarks exhibit different trends as rollout count increases. Specifically, MATH, AIME, and Olympiad benchmarks saturate at 64 rollouts. For Gaokao and College Math, where performance showed signs of further improvement beyond 64 rollouts, we conducted additional 128-rollout experiments. As shown in the following table, Pass@N scores consistently improve with more rollouts, but gains become marginal beyond 64 rollouts compared to the doubled search costs. On Gaokao, increasing rollouts from 64 → 128 resulted in only a slight improvement (81.3 → 81.6 for pass@1). On College Math, performance fully saturated at 128 rollouts. 
|benchmark|1| 2| 4| 8| 16| 32| 64| 128|
| :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: |
|Gaokao En (pass@1) | 74.5| 78.4|79.0 |80.5| 80.5|81.0 | 81.3|81.6|
|Gaokao En (pass@n) | 74.5| 78.4| 81.8| 83.4| 85.5| 86.8| 87.0| 87.5|
|College Math (pass@1) | 55.9| 57.2| 58.0 | 59.0| 59.6| 60.1| 60.5| 60.5 |
|College Math (pass@n)| 55.9| 57.2| 60.9| 63.5| 64.9|66.4| 67.6|68.9 |

We appreciate the reviewer’s constructive feedback and will incorporate these analyses into our revision.

>Q2: Computational efficiency of self-evolution is unclear—training costs (weeks of GPU time) may be prohibitive for broader adoption.

**Response**: Thank you for your thoughtful feedback and for highlighting the importance of computational efficiency. We provide a detailed cost breakdown in the appendix and further clarify below. Self-evolution primarily involves two stages: (1) Policy Model & PPM Training; (2) Training Data Generation via extensive MCTS rollouts. As shown in the following tables, training is efficient. Each round completes within a day.

| |GPUs|Training time|
| :--: | :--: | :--: |
|Policy model|8xMI300 | 20 hours |
|PPM | 8xMI300 | 15 hours|

Training data generation is the main cost, but it is scalable and affordable: (1) From Round 2 onward, since our policy model and PPM are both 7B, they can be served on a single 40GB A100. (2) To process 747K math problems efficiently, we used 15 groups of 4×40GB A100s, completing data generation in ~3 days. (3) Further speedup is feasible: This process scales linearly with more GPUs (i.e., reducing the number of problems assigned per GPU). Round 1 is the only costly stage, as it requires bootstrapping with DeepSeek-Coder-V2, which was done using 8×80GB H100s. 
|Round |GPUs|Data generation time|
| :--: | :--: | :--: |
|Round 1|5x8x80GB H100| 2 weeks|
|Round 2| 15x4x40GB A100| 2-3 days |
|Round 3| 15x4x40GB A100| 2-3 days |
|Round 4| 15x4x40GB A100| 1 week |

Overall, we believe the computational cost is reasonable and manageable, and the primary bottleneck—data generation—can be further optimized with additional GPUs. We will refine our explanation in the revision to provide a clearer analysis of efficiency and scalability.

--- Rebuttal Comment 1.1: Comment: I really appreciate the authors' response! Overall, this is a solid paper. I hope the authors consider including their reply in the appendix—it would add valuable context and clarity.

--- Reply to Comment 1.1.1: Comment: We sincerely thank you for the encouraging feedback. We will include our response in the appendix as suggested.
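The pass@1 and pass@n aggregation used in the rollout-scaling analysis above can be sketched as follows. This is an illustrative reading of the metrics, not the authors' evaluation code; the `ppm_score` field is a hypothetical stand-in for the trained PPM's trajectory score.

```python
def pass_at_1(rollouts):
    """Best-of-N: pick the trajectory the reward model scores highest,
    then check whether its final answer is correct."""
    best = max(rollouts, key=lambda r: r["ppm_score"])
    return best["correct"]

def pass_at_n(rollouts):
    """Oracle upper bound: correct if any sampled trajectory reaches
    the right final answer."""
    return any(r["correct"] for r in rollouts)

# Hypothetical per-rollout records for a single problem.
rollouts = [
    {"ppm_score": 0.31, "correct": False},
    {"ppm_score": 0.82, "correct": True},
    {"ppm_score": 0.55, "correct": False},
]
print(pass_at_1(rollouts), pass_at_n(rollouts))  # True True
```

With more rollouts, pass@n can only grow, while pass@1 improves only insofar as the PPM ranks a correct trajectory on top, matching the diverging pass@1/pass@n curves in the tables above.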
Summary: This paper aims to improve the mathematical reasoning capabilities of small LLMs through a self-evolved deep thinking framework, rStar-Math. The method involves three main contributions: (1) a code-augmented CoT data synthesis method; (2) a pairwise training method for the process preference model that avoids direct step-level reward annotations; and (3) a self-evolution recipe that iteratively improves the reasoning capabilities of the LLMs. Experimental results on math reasoning benchmarks show the effectiveness of the proposed method. Claims And Evidence: There is a notable inconsistency that warrants attention. The authors claim that their method achieves **self-evolved** deep thinking **without distillation** from superior models, emphasizing the independence of their approach from larger or more advanced models. This claim is central to the paper's novelty and contribution. However, in Section 3.3, Round 1, the authors state that "we run MCTS with DeepSeek-Coder-V2-Instruct (236B) to collect the SFT data". This use of a much larger model directly contradicts the claim of not relying on superior models for data synthesis. The initial bootstrap round leverages the capabilities of a 236B model, which introduces a form of distillation from a more advanced model. This inconsistency suggests an overclaim in the introduction that is not fully aligned with the methodology described later in the paper. Methods And Evaluation Criteria: - The proposed methods make sense for improving the mathematical reasoning capabilities of LLMs. - The used math reasoning benchmarks are appropriate for assessing the performance of LLMs. Theoretical Claims: The paper does not introduce new theoretical contributions. Experimental Designs Or Analyses: The experimental designs and analyses are sound. Supplementary Material: I reviewed the supplementary material of Additional Experiments and Details. 
Relation To Broader Scientific Literature: The key contributions of the paper are related to the broader scientific literature on improving reasoning capabilities in language models. Essential References Not Discussed: The paper has cited relevant prior work. Other Strengths And Weaknesses: Strengths: - The paper is well written and easy to understand. - The code-based cot is interesting. - The experimental results demonstrate clear performance gains. Weaknesses: - The main issue with the paper is the overclaim regarding the "Self-Evolved" nature of the proposed method. The authors claim that their approach does not rely on distillation from superior models. However, the use of a 236B model (DeepSeek-Coder-V2-Instruct) in the initial round of self-evolution directly contradicts this claim. This inconsistency undermines the core novelty of the method and needs to be clarified by the authors. - The paper proposes a code-based CoT approach to enhance math reasoning. While this method is effective for math problems, it raises concerns about domain-specificity. Specifically, the reliance on code execution and Python-based verification may limit the applicability of this approach to other domains where code-based reasoning is less relevant or feasible. The authors should address this limitation by discussing the potential generalizability of their method to other reasoning tasks beyond math. Typo: - In line 362, the figure number is missing. Other Comments Or Suggestions: See weaknesses. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: >Q1: The main issue with the paper is the overclaim regarding the "Self-Evolved" nature of the proposed method. The use of a 236B model (DeepSeek-Coder-V2-Instruct) in the initial round of self-evolution directly contradicts this claim.

**Response**: We appreciate the reviewer's feedback regarding our claim of "self-evolved deep thinking". We clarify our approach and demonstrate that self-evolution remains the primary driver of improvement. Notably, the effectiveness of self-evolution is acknowledged by all three other reviewers.

1. **Clarification of self-evolution**: Our method relies on iterative improvement through MCTS-driven deep thinking. While we use DeepSeek-Coder-V2-Instruct-236B in Round 1 for bootstrapping, **this does not constitute distillation**, as its main role is to provide *an initial dataset*, and it plays no role in later rounds. The key novelty lies in **Rounds 2-4, where we progressively train stronger 7B policy and PPM models to improve independently through self-evolution**. The two models did not rely on a superior model *during the learning*, which is the key characteristic that defines model distillation. This is similar to a person who self-evolves their capability by solving problems with known answers (the initial dataset) without other people's help.

2. **Empirical evidence: performance gains from self-evolution**: Table 8 (Appendix A.2) details per-round performance. We also provide the key results below for reference. The results show that the policy model (Round 1) did not match the performance of the 236B model, even with SFT data. Performance gains primarily occur in Rounds 2-4, driven by self-play and MCTS. The final 7B model outperforms the 236B model and approaches o1-mini's performance. 
||MATH| AIME| Olympiad Bench| College Math|
| :--: | :--: | :--: | :--: | :--: |
|DeepSeek-Coder-V2-Instruct (bootstrap model)| 75.3| 13.3|37.6|46.2|
|our policy Round1| 69.6| 3.3| 34.7| 44.5|
|our policy Round4| **78.4**| **26.7**| **47.1**| **52.5**|
|o1-mini | **90.0** |**56.7** |65.3 |57.8 |
| our policy+PPM Round4| **90.0**|53.3 |**65.6** |**60.5** |

3. **The role of Round 1: code-augmented CoT format induction**: We use the 236B model in Round 1 primarily to ensure that our policy model can generate code-augmented CoT traces, not to transfer problem-solving ability.

4. **Distinction from distillation**: Distillation typically involves a student model mimicking a teacher model throughout training. In contrast: (i) we use the 236B model only once to generate the initial dataset, (ii) self-evolution proceeds without further reliance on the 236B model, and (iii) later performance improvements arise purely from self-play. **Our setup is analogous to AlphaGo**, where an initial policy network is required for stability, but key performance gains stem from iterative self-evolution. To clarify, we will revise the paper to emphasize that Round 1 is for data formatting (code-augmented CoT), not capability transfer. We hope this addresses your concerns and welcome further suggestions.

>Q2: The paper proposes a code-based CoT approach to enhance math reasoning. It raises concerns about domain-specificity. The authors should address this limitation by discussing the potential generalizability of their method to other reasoning tasks beyond math.

**Response**: **Clarification of scope: focus on math reasoning**: Thank you for raising this point. We would like to emphasize that the primary focus of our paper is math reasoning, as clearly stated in the title. While generalizing to other domains is an interesting direction, exploring them is outside the scope of the current work. 
As you correctly point out, the potential generalizability of our method beyond math is definitely a direction worth exploring in future work, as we elaborate next. 1. **Code-augmented CoT for code reasoning**: Our code-augmented CoT approach could indeed be valuable for other reasoning tasks, such as code reasoning. Code reasoning shares similarities with math reasoning in terms of structured problem-solving, making this method applicable in such domains as well. 2. **General Applicability of Self-Evolved Deep Thinking & PPM**: Beyond code-augmented CoT, our work’s core contributions — Self-Evolved Deep Thinking and the PPM — potentially offer a more general framework for various reasoning tasks. A key challenge in general reasoning lies in providing feedback to verify whether a trajectory reaches the desired outcome at the end of an MCTS rollout. In code reasoning, this could involve extensive test cases, while in other domains, feedback might come from human annotation or mutual verification with another LLM. Future work will explore these directions. Thank you again for your valuable feedback and suggestions. We hope these responses address your concerns and clarify any confusion, and we kindly ask you to consider re-evaluating our work. --- Rebuttal Comment 1.1: Comment: Thanks for your clarification. I have updated my score. Good luck! --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our responses. We truly appreciate your thoughtful feedback and your support!
Summary: This paper shows that smaller language models ($\leq7$ billion parameters) can learn to solve challenging math problems at a level comparable to much larger models (e.g., GPT-4 or o1 models). They achieve this by having these smaller models: - Generate and verify each step of a math solution (rather than producing it all at once). - Use Monte Carlo Tree Search (MCTS), guided by a specially trained “process reward model,” to explore different solution steps and pick the best paths. - Iteratively improve themselves (“self-evolution”) by generating new training data and refining both the small “policy” model (the one that proposes solution steps) and the reward model (the one that scores each intermediate step). Through repeated rounds of this procedure, their 7B model eventually surpasses or matches some large commercial models (like OpenAI’s “o1-preview” or “o1-mini”) on difficult math benchmarks. Claims And Evidence: The paper’s main claims—that small models can rival larger ones in mathematical reasoning through iterative self-evolution and process-level reward models—are largely backed by systematic experiments on multiple math benchmarks, ablation studies, and comparisons to existing baselines Methods And Evaluation Criteria: Their proposed method is not as novel as stated, given that (1) the practice of augmenting solutions with comments or code already exists (e.g., program-aided language models), and (2) leveraging Monte Carlo search for data augmentation or self-evolution is a well-known technique rather than a fresh contribution. Finally, their AIME test set is very small (only 15 items), raising concerns that it could be a cherry-picked scenario. 
Theoretical Claims: They have no theoretical claims. Experimental Designs Or Analyses: The authors’ experimental design generally appears sound, particularly in their ablation studies and comparisons on multiple benchmarks, but a closer look at smaller test sets, such as the 15-problem subset of AIME, raises concerns about cherry-picking or insufficient sample size. Supplementary Material: I reviewed the whole Appendix, including A.1–A.4. Relation To Broader Scientific Literature: The methods presented—such as code augmentation, step-by-step reasoning, and Monte Carlo tree search for self-evolution—are already known in the broader literature; however, this paper combines them into a cohesive pipeline. Its primary contribution lies in showing that smaller models, when using these techniques, can attain performance on par with larger LLMs like o1 for certain categories of math problems. Essential References Not Discussed: No Other Strengths And Weaknesses: A key strength is that the paper demonstrates even smaller language models can perform on par with larger ones (such as o1) when solving high school–level math problems. However, the proposed code-augmented CoT data synthesis method has two main shortcomings: while it can detect computational and symbolic deduction errors, it cannot identify logical errors (since Python typically won’t produce an exception in those cases), and its verification is limited to tasks manageable via code execution rather than general math proofs, though this may still suffice for datasets like MATH. Other Comments Or Suggestions: No other comments Questions For Authors: In many RL settings, methods that rely heavily on pre-trained or trainable critic models can be easily hacked by LMs. How does your approach mitigate these risks? Could you explain any strategies you employ to ensure the policy cannot simply exploit the critic’s learned features? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: >Q1: Clarification on the evaluation benchmarks: "The authors’ experimental design generally appears sound, particularly in their ablation studies and comparisons on multiple benchmarks, but a closer look at smaller test sets, such as the 15-problem subset of AIME, raises concerns about cherry-picking or insufficient sample size."

**Response**: Thank you for your thoughtful feedback! We would like to clarify that for AIME 2024, we included both AIME I and II, resulting in 30 problems in total rather than 15. Moreover, our evaluation follows prior works [1,2,3] by selecting AIME 2024 and MATH-500 as key benchmarks, alongside six additional benchmarks, covering a total of 10,195 problems. The following table provides a detailed breakdown of the number of problems in each benchmark. Given this large-scale evaluation, we believe our results are robust and not based on cherry-picked subsets. We will revise the paper to clarify these details.

|Total|MATH (500)|AIME 2024 (I&II)| AMC 2023| Olympiad Bench| College Math | GSM8K|GaokaoEn 2023| Omni-Math|
| :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: |
|10195 |500 | 30|40|675|2818|1319|385|4428|

[1] DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning [2] Kimi k1.5: Scaling Reinforcement Learning with LLMs [3] OpenAI o1, learning to reason with LLMs: https://openai.com/index/learning-to-reason-with-llms/

>Q2: the proposed code-augmented CoT data synthesis method has two main shortcomings: while it can detect computational and symbolic deduction errors, it cannot identify logical errors (since Python typically won’t produce an exception in those cases), and its verification is limited to tasks manageable via code execution rather than general math proofs, though this may still suffice for datasets like MATH.

**Response**: Thank you for your insightful comment! 
We would like to clarify that the design of our code-augmented CoT is specifically focused on reducing computational and symbolic deduction errors, rather than logical errors. To address logical errors, we primarily rely on our proposed PPM (process preference model), which is designed to select the optimal reasoning step at each stage, such that the final trajectory reaches the correct answer. Regarding math proofs, as discussed in Appendix A.1 (Generalization to theorem proving), our method shows great potential for application to math proofs, as demonstrated by examples in Appendix A.3. The current limitation is step-level proof verification, which we believe can be addressed with a proof-capable PPM. To overcome this, our future work will focus on collecting step-level proof data to train a dedicated math proof PPM. >Q3: In many RL settings, methods that rely heavily on pre-trained or trainable critic models can be easily hacked by LMs. How does your approach mitigate these risks? Could you explain any strategies you employ to ensure the policy cannot simply exploit the critic’s learned features? **Response**: Thank you for your insightful question, which prompted deeper reflection and discussion on this important issue. Here are our thoughts: 1. **MCTS-driven self-evolution adopts offline updates**: Reward hacking is a known challenge in **general RL and RLHF** when the policy is updated online [1,2,3]. In such cases, the policy can exploit flaws in the reward model, leading to increasing reward scores without real performance gains. Our approach uses offline updates for both the policy and reward model. In each self-evolution round, we fix the policy model and PPM (from the previous round) and use MCTS to generate training data offline. This decoupling ensures controlled (real high-quality) data selection and significantly reduces reward hacking. 2. 
**Robust process-level rewards**: A key cause of reward hacking in general RL settings is the lack of reliably labeled response feedback, requiring a learned reward model for scoring. In contrast, math reasoning allows direct validation against ground truth, avoiding the need for a reward model to score final answers. To avoid potentially unreliable reward labels in process-level annotation, we incorporate: (i) MCTS Rollouts and Backpropagation: Initial step-level Q-values from the PPM are refined via MCTS rollouts, with backpropagation adjusting scores based on how often steps lead to correct answers. (ii) Process Preference Model: Instead of using raw Q-values — often imprecise even with extensive rollouts — we construct preference pairs and train the PPM accordingly, reducing noise and improving robustness. Extensive experiments validate these strategies, showing our approach mitigates reward hacking while ensuring reliable process-level supervision. We welcome any further discussions! [1] Defining and Characterizing Reward Hacking [2] Scaling Laws for Reward Model Overoptimization [3] Reward Shaping to Mitigate Reward Hacking in RLHF
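The pairwise preference training mentioned in point (ii) can be sketched as a Bradley–Terry style ranking loss over (preferred, dispreferred) step pairs. This is an illustrative sketch, not the paper's exact formulation; the scalar scores stand in for PPM outputs.

```python
import math

def pairwise_ranking_loss(pos_scores, neg_scores):
    """Average -log(sigmoid(r_pos - r_neg)) over all (preferred, dispreferred)
    pairs, pushing preferred steps to score above dispreferred ones."""
    total, count = 0.0, 0
    for rp in pos_scores:
        for rn in neg_scores:
            total += math.log(1.0 + math.exp(-(rp - rn)))  # = -log sigmoid(rp - rn)
            count += 1
    return total / count

# Hypothetical PPM scores for two preferred and two dispreferred steps.
loss = pairwise_ranking_loss([2.0, 1.5], [-1.0, -0.5])
print(round(loss, 3))  # small: preferred steps already rank above dispreferred ones
```

Because only the relative ordering of scores matters, the model never has to fit noisy absolute Q-value labels, which is the robustness argument made above.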
Summary: This work presents a methodology to improve the reasoning performance for math of small language models to levels competitive with the state-of-the-art models. Specifically, the rStar-Math approach uses MCTS along with a verifier by interpreting the reasoning steps as code, in order to train a policy model and a preference model over multiple rounds. The policy model, refined by the preference model, generates reasoning trajectories, and these reasoning trajectories are used to refine the preference model, eventually leading to a policy model capable of significantly enhanced reasoning performance. Among other things, ablations show that the math data thus generated is of inherently high quality even for pure SFT. ## update after rebuttal I maintain my strong recommendation to accept this paper to the conference and thank the authors for the interesting discussion. Claims And Evidence: I think the experiments support the claims made in this paper: the paper provides a recipe that does enable small models to be competitive with reasoning SotA of January 2025. See below for more. Methods And Evaluation Criteria: The methods make sense and the intuition is well supported by past works. The metrics used to evaluate their models are standard and appropriate. Of course, human evaluations, and analysis of efficiency vs production models would have been great to have, but can’t be expected for obvious reasons. Theoretical Claims: N/A Experimental Designs Or Analyses: I think the authors set up their experiments well and ran the ablations I most immediately would have wanted them to. Supplementary Material: I read it all with varying levels of attention. Relation To Broader Scientific Literature: I think this work connects well with the current interest of the community in reasoning, although see feedback below. 
Essential References Not Discussed: There have been past works using code to verify steps of math reasoning before that are not mentioned/discussed in 3.1, e.g. LEVER, MathCoder (which you cite but do not discuss that aspect of it), etc. Other Strengths And Weaknesses: While this is not required in the context of the reviews, I strongly encourage the authors to share code, models and/or especially the data generated on the usual platforms the foundation model community uses if not done already. Other Comments Or Suggestions: Typos: 1. P1: “ranking among the top 20% the brightest high school math students.” missing word 2. P3: “As a result, scaling up CoT data has diminishing returns, with gains nearing saturation.” I would slightly rewrite to make it clear that you are talking about scaling up by naively generating from frontier models, since after all your approach is also about scaling up CoT data, but in a more refined way. 3. P4: “we perform MCTS rollut” 4. P4: “we use terminal-guided annotation Formally” missing punctuation 5. Eq2: I recommend reintroducing $s_d$ (can be done by adding “$s_d$” after “Terminal nodes” in the last sentence of the paragraph following the equation) 6. P4: “PRM-augmented annotation.” should read PPM 7. Eq 4: $\sigma$ is not introduced, and if you are summing over the 4 combinations of pairwise choices of positive and negative $y_i$, it should be made clearer in the equation 8. P5: “Problems are then categories by difficulty” 9. P7: “Specifically, for AIME/AMC, we generate 16 trajectories for AIME/AMC and 8 for other benchmarks, using PPM to select the best solution.” sentencing, probably solved by removing the first “for AIME/AMC” 10. P7: “srStar-Math” 11. P7: “In Fig. ??” broken ref 12. P8: “rStar-Math can achieve further improvements by collecting more challenging math problems, we leave this as future work.” phrasing (perhaps “We observe that [there can be further improvements with more data], which we leave as future work”) 13. 
P13: “Training PPM” missing “the” Suggestions: 1. P4: you evoke MCTS backpropagation. I suggest going into a bit more detail about that, perhaps in the appendix, as in my experience readers of practical LLM training methods may not be familiar with that. 2. Tab3: “Specifically, for AIME/AMC, we generate 16 trajectories for AIME/AMC and 8 for other benchmarks, using PPM to select the best solution.” I recommend adding a version of that to the table’s caption Questions For Authors: 1. P3: “Candidates that execute successfully are retained as valid nodes and scored by the PPM, which assigns a Q-value” it is not clear to me why the PPM assigns the Q value as page 2 indicates that the Q value is generated separately from the number of trajectories leading to the correct answer for a given node, while the PPM is trained *from* the Q value; could you clarify that point (here and ideally either in page 2 or 3 depending on your answer)? Currently the flow of p2 and fig 1c makes it seem like the code-augmented component and Q value assignment of your pipeline happens already before (and is a prerequisite for) training the PPM. 2. Fig 2: is it intended that the python code for step 2 repeats the python code for step 1 first? 3. P5: “In each selfevolution round, we perform 16 rollouts per math problem, which leads to 16 reasoning trajectories. “ considering multiple steps and multiple future trajectories per step in rollouts, wouldn’t that lead to more than 16 reasoning trajectories? Unless you discard all but the highest scored one (which seems to be what is suggested later in the paper, in which case I’d clarify that in the paragraph). 4. Tab6: could PQM be underperforming due to PPM being more attuned to the policy model since it stems from several rounds of iterating? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for your thoughtful and positive feedback on our work. We sincerely appreciate your insights and your recognition of our contributions. Below, we address your specific comments.

>Q1: I strongly encourage the authors to share code, models and/or especially the data generated on the usual platforms the foundation model community uses if not done already.

**Response**: Thank you for your suggestion. We are committed to releasing the code and the generated data as soon as possible.

>Q2: P3: “Candidates that execute successfully are retained as valid nodes and scored by the PPM, which assigns a Q-value” it is not clear to me why the PPM assigns the Q value as page 2 indicates that the Q value is generated separately from the number of trajectories leading to the correct answer for a given node, while the PPM is trained from the Q value; could you clarify that point (here and ideally either in page 2 or 3 depending on your answer)? Currently the flow of p2 and fig 1c makes it seem like the code-augmented component and Q value assignment of your pipeline happens already before (and is a prerequisite for) training the PPM.

**Response**: Thank you for your question! We acknowledge that the current description may cause some ambiguity, and we will revise the paper to clarify this process. To clarify: as shown in Fig. 1(c), in the first and second rounds of data generation, we did *not* have a PPM yet. Instead, we assigned Q-values to each candidate based on terminal-guided annotation. This Q-value-labeled data was then used to train the PPM. Starting from the third and fourth rounds, the PPM became available, enabling PPM-augmented Q-value annotation. Specifically, during MCTS, each newly generated node is initially assigned a Q-value predicted by the PPM (whereas in the first two rounds, the initial Q-value was set to 0). This Q-value is then refined through MCTS rollouts. We will update Page 3 to make this process clearer.
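The round-dependent initialization described in this response can be sketched as a small helper (illustrative only; `initial_q` and the `ppm.predict` interface are hypothetical names, not from the paper):

```python
def initial_q(node_state, round_idx, ppm=None):
    """Initial Q-value for a newly expanded MCTS node.

    Per the rebuttal: in self-evolution rounds 1-2 there is no PPM yet,
    so new nodes start at Q = 0 and rely on terminal-guided annotation;
    from round 3 onward the trained PPM predicts the starting Q-value,
    which MCTS rollouts then refine. (Hypothetical interface: `ppm` is
    any object exposing a `predict(state)` method.)
    """
    if round_idx <= 2 or ppm is None:
        return 0.0
    return ppm.predict(node_state)
```

In either regime the returned value is only the starting point: subsequent MCTS backpropagation updates it from rollout outcomes.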
>Q3: Fig 2: is it intended that the python code for step 2 repeats the python code for step 1 first?

**Response**: Yes, this design is to ensure that the Python code executes correctly. Specifically, Step 2's code may depend on variables or functions defined in Step 1. If a node corresponding to Step 2 were to execute only its own Python code without including the preceding steps, it could lead to syntax or execution errors due to undefined references.

>Q4: P5: “In each selfevolution round, we perform 16 rollouts per math problem, which leads to 16 reasoning trajectories.” considering multiple steps and multiple future trajectories per step in rollouts, wouldn’t that lead to more than 16 reasoning trajectories?

**Response**: Thank you for your question! Our definition of rollout follows prior works [1,2,3], where a full rollout refers to a complete reasoning trajectory from the root node (question) to a terminal answer node. Thus, each rollout consists of multiple reasoning steps and explores different paths, ensuring that performing 16 rollouts per math problem yields at least 16 full reasoning trajectories. We appreciate your careful reading and will revise the paragraph to make this clearer.

[1] Reasoning with Language Model is Planning with World Model, EMNLP 2023
[2] Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers, ICLR 2025
[3] LLaMA-Berry: Pairwise Optimization for Olympiad-Level Mathematical Reasoning via o1-like Monte Carlo Tree Search

>Q5: Tab6: could PQM be underperforming due to PPM being more attuned to the policy model since it stems from several rounds of iterating?

**Response**: Thank you for your insightful question! The primary reason PQM underperforms compared to PPM is that using score-based Q-values as direct labels for the process reward model inherently introduces imprecision. Although extensive MCTS rollouts help improve Q-value accuracy, these Q-values struggle to differentiate fine-grained quality levels.
For example, distinguishing an optimal step (score=1.0) from a near-optimal step (score=0.9) is inherently challenging, whether through MCTS automatic annotation or human annotation. This inevitably introduces noisy scores in PQM's training data, affecting its reliability.

To fully address your question, we conduct an additional experiment. We use the generated data from SLM-r1 (our first trained 7B policy SLM from round 1) to train both PQM and PPM. As shown in the table, PPM consistently outperforms PQM from the early stages of self-evolution. This empirically demonstrates that PPM's advantage stems from its superior design, which effectively mitigates the impact of noisy Q-value scores.

||MATH|AIME 2024| AMC 2023| Olympiad Bench| College Math | GaokaoEn 2023|
| :--: | :--: | :--: | :--: | :--: | :--: | :--: |
|SLM-r1+PQM (trained by SLM-r1 generated data)| 82.4 | 23.3|**75.0** |49.6|52.9|70.3|
|SLM-r1+PPM (trained by SLM-r1 generated data)| **84.0**| **26.7**| **75.0**| **52.7**| **54.2**| **73.0**|

---
Rebuttal Comment 1.1: Comment: I thank the authors for their detailed answer and for updating the paper with these clarifications / experiments / corrections of typos.
---
Reply to Comment 1.1.1: Comment: We greatly appreciate your kind acknowledgment. We will update the paper accordingly. Thank you for your valuable feedback!
To Each Metric Its Decoding: Post-Hoc Optimal Decision Rules of Probabilistic Hierarchical Classifiers
Accept (poster)
Summary: This paper investigates the problem of Bayes-optimal prediction in hierarchical multi-class classification. A decision-theoretic framework is assumed, where probabilities are estimated during training, and the Bayes-optimal prediction is computed at test time. The authors consider three types of settings, where the output of the Bayes-optimal classifier is a leaf, a tree node, or a set of tree nodes, respectively. For the three settings, they show how Bayes-optimal predictions can be obtained for various loss functions. In the experiments, Bayes-optimal and heuristic decodings are compared to each other, showing that Bayes-optimal decodings give better results. Additional experiments are conducted to show that the advantage of Bayes-optimal decodings becomes bigger for specific probability distributions.

Claims And Evidence: I think that most claims are supported by convincing evidence. However, the claim in Theorem 4.7 is hard to believe. I think that this claim can only hold if additional restrictions are made for the loss function. For the hierarchical F-measure, this claim is probably true, but I would be surprised if the claim holds for the hierarchical Jaccard-measure (as far as I know, this measure has not been used in hierarchical classification, but it only differs from the F-measure via an extra term in the denominator). Some authors, such as Chierichetti et al., have claimed that optimal decoding for the Jaccard is an NP-hard problem.

F. Chierichetti, R. Kumar, S. Pandey, and S. Vassilvitskii. Finding the Jaccard median. In Proceedings of ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 293–311, 2010.

So, I think that Theorem 4.7 has to be reviewed by the authors.

Theorem 4.4 also looks a bit weird to me. It says that the Bayes-optimal prediction can be computed in O(d_max x |L| + |N|) time, but the prediction is a single node, so a brute-force algorithm that investigates all nodes is also linear in the number of nodes?
What am I missing here?

Overall I find the paper not very easy to follow, because essential information is put in appendices. I would have liked to see the algorithms in the main paper. Section 3 could be written in a more compact manner by removing the definition environments for basic concepts.

It is also not very easy to assess what the key novelty is. The paper considers a rather general setting, with three types of predictions, for which theoretical results have been established in previous works. In Section 4 I was missing a clear description of what is novel, and what is incremental to previous papers.

Methods And Evaluation Criteria: For me it is a weird choice to use flat classifiers during training. These classifiers ignore the hierarchy, but more specialized probabilistic models have been proposed in the past, such as nested dichotomies, probabilistic classifier trees and neural networks with a hierarchical softmax output layer. It is unclear whether the proposed algorithms can be used together with hierarchical probabilistic classifiers.

The authors only analyze two datasets. I find this quite limited, because there are other hierarchical classification datasets available from several domains (vision, text, biology, etc.). Having more datasets could be useful to study the issues reported in Sections 5.2 and 5.3 in more detail. More specifically, the authors report that the improvement of Bayes-optimal decodings over heuristics is problem dependent. Similar issues have been reported in multi-label classification. For example, the improvement of Bayes-optimal versus heuristic algorithms for the F-measure depends on the underlying probability distribution. For the case where labels are independent, heuristic algorithms work better. Perhaps similar phenomena exist in hierarchical classification, which could also be studied in a more theoretical manner.

Theoretical Claims: See above.

Experimental Designs Or Analyses: See above.
Supplementary Material: I have to review 6 papers for ICML, so I don't have time to read appendices.

Relation To Broader Scientific Literature: I believe that the topic of the paper is relevant, but it is still a bit unclear to me what's novel and what's incremental to previous work.

Essential References Not Discussed: I think that the most important references are included.

Other Strengths And Weaknesses:
Strengths:
- I think that the paper contains novel theoretical results, but I am not sure.
- The paper is relatively well written, but some improvements can be made.
Weaknesses:
- The story is a bit unclear.
- I am not convinced of some of the theoretical results.
- More datasets in the experiments could be useful.

Other Comments Or Suggestions: No.

Questions For Authors: See above. For the moment I am voting for rejecting the paper, but I might change my mind after the rebuttal phase.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback and insightful points. Below, we address each question concisely.

*On Theorem 4.7 and the Jaccard Index*

We would like to clarify that Theorem 4.7 and Section 4.3 **only** apply to the hFβ-score and not to the Jaccard Index, or any other metric. As stated, Th. 4.7 is correct, though we will refine its formulation for clarity. During our research, we did attempt to extend Th. 4.7 to the Jaccard Index but were unsuccessful. This highlights the complexity of directly optimizing the Jaccard Index, which is indeed known to be an NP-hard problem. The reviewer's observation on the link between F1 and Jaccard aligns with Waegeman et al. (2014), who provide theoretical bounds on the regret of using F1 decoding as a surrogate for the Jaccard Index. This suggests that Th. 4.7 could offer an alternative to bypass the intractability of Jaccard Index decoding in hierarchical classification. Overall, and as noted by the reviewer, we also decided not to use the Jaccard Index as a metric due to its limited adoption in the context of hierarchical classification.

*On Theorem 4.4 and Complexity*

A brute-force algorithm that evaluates all nodes has a time complexity of O(N * L), not O(N). In fact, for each node n, computing the expectation E[L(n,Y)] = Σ_l p(l) L(n, l) requires O(L) operations. Our O(d_max * L + N) algorithm therefore improves upon the O(N * L) brute-force search.

*On Paper Readability and Algorithm Placement*

We acknowledge the clarity concern and will integrate key algorithms into the main text for better readability.

*On Basic Concept Inclusion*

In fact, we had a hard time deciding what to include and what to exclude. One of our goals was to make the paper accessible to the hierarchical classification community, which may not be deeply familiar with Bayes-optimal decoding concepts, as these are often an overlooked aspect in empirical research on hierarchical classification.
This led us to include some fundamental explanations which may appear basic for an initiated reader. Still, we will try to figure out a way to make it more straightforward, as the reviewer suggests.

*On Novelty in Section 4*

We acknowledge this remark and will clarify contributions by adding an introduction to Section 4. Key novelties are:

**Node candidate set (Th. 4.4)**: A general optimal decoding algorithm for *hierarchically reasonable* metrics, improving complexity to O(d_max * L + N) from O(N * L).\
**Subset of nodes candidate set (Th. 4.7)**: An optimal decoding algorithm for hF_β scores in O(d_max² * N), improving on intractable brute-force search.

*On Training with Flat/Hierarchical Classifiers and applicability of our decoding methods to hierarchically-aware probability estimates*

Our decoding algorithms **do not make any assumptions** about the probability estimation procedure. They only require a probability distribution over leaves, which can be obtained from either a flat or a hierarchy-aware classifier. In practice, **we did use hierarchy-aware classifiers**. As detailed in the Models subsubsection of Section 5, our experiments included several hierarchy-aware classifiers, including the hierarchical softmax (referred to as conditional softmax, as in its original introduction). The decision to use flat classifiers from the PyTorch library was primarily driven by their applicability to our task as well as their widespread availability, which facilitated the scalability of our experiments. However, we acknowledge that our selected models did not include nested dichotomies or probabilistic classifier trees.

*On Dataset Scope*

While we acknowledge that additional datasets could have been included, we believe the current scope is already comprehensive. As reported, we provide results for 6 metrics, compare our methods against 8 baselines, and evaluate 19 different models across 2 datasets.
We draw inspiration from recent literature (Bertinetto et al., 2020; Karthik et al., 2021; Valmadre, 2022; Garg et al., 2022; and Jain et al., 2023), which focuses on these 2 datasets. That said, we have initiated experiments on a text dataset (Web of Science, Kowsari et al., 2018), and we will consider adding them to the paper.

*On Problem-Dependence of Bayes-Optimal Decoding*

The phenomenon highlighted by the reviewer regarding the dependency of improvement on the underlying distribution closely aligns with our experiment using progressive blurring of images. By applying blur, we shift the underlying probability distributions towards the center of the simplex. As shown in Figure 4, increased blurriness further emphasizes the importance of optimal decoding. We agree that the theoretical study of the regret associated with using a heuristic algorithm versus the optimal algorithm is an interesting topic, as explored in works such as Waegeman et al. (2014). While such theoretical considerations are indeed challenging, we view them as a future research direction.

---
Rebuttal Comment 1.1: Comment: I thank the authors for the detailed response. It made a few points more clear to me.
---
Reply to Comment 1.1.1: Comment: As we come closer to the end of the rebuttal, we kindly ask the reviewer to reconsider their score in light of the addressed concerns.
Summary: This paper tackles optimal decoding in hierarchical classification by developing universal algorithms for hierarchically reasonable metrics and a specialized algorithm for hF$_β$-scores. These methods, designed to find the best prediction given a posterior probability distribution, are particularly effective in underdetermined classification tasks, as demonstrated empirically.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: I did not check the proofs.

Experimental Designs Or Analyses: Figure 2 and Tables 3 through 9 lack error bars and statistical significance testing. Therefore, I cannot determine the significance of the reported results.

Supplementary Material: No.

Relation To Broader Scientific Literature: The discussion of related work in Section 2 appears reasonable.

Essential References Not Discussed: N/A.

Other Strengths And Weaknesses: The proposed method assumes that the exact posterior probability distribution is given, which is a significant limitation. In fact, I believe that estimating the posterior probability distribution is more challenging than the 'Bayes-optimal decoding' itself. Therefore, I do not consider the method presented in this paper to be significant.

Other Comments Or Suggestions: The visibility of Figure 2 is poor.

Questions For Authors:
1. Could the authors add the missing error bars and statistical significance testing for the experiments?
2. How does the proposed method compare to prior work, for example, Cao et al. (2024), when used with the tree distance loss and its generalized forms?

## Update after rebuttal

The authors have addressed several of my earlier concerns, which has helped me better appreciate the main contributions of the paper. However, one important issue remains unresolved: the comparison between the proposed algorithm and that of Cao et al. (2024), particularly in terms of the tree distance loss and its generalized variants.
Despite my explicit request in the review, the authors did not provide any empirical results on this point. I consider this a significant omission, and therefore, I maintain my current score.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback even though we find the review quite unfair and not very detailed. We try to answer the few elements you point out in the review.

*The proposed method assumes that the exact posterior probability distribution is given, which is a significant limitation.*

We respectfully disagree with the reviewer on this point. We develop theoretical algorithms that are, in fact, based on the knowledge of the exact posterior probability distribution. However, in all our experiments, that restriction is relaxed, and we apply these algorithms to the estimated posterior probability distribution. We then show the empirical superiority of these theoretical algorithms.

*In fact, I believe that estimating the posterior probability distribution is more challenging than the 'Bayes-optimal decoding' itself. Therefore, I do not consider the method presented in this paper to be significant.*

These are, in fact, two distinct research directions, and we believe both are interesting, but they can also be seen as orthogonal problems. We strongly believe that Bayes-optimal decoding is worth investigating and is significant for several key reasons, which we outline below. First, the use of pre-trained models is becoming increasingly common and many users lack the computational resources or expertise to train these models themselves: knowing how to correctly decode these pre-trained models for a given application is therefore crucial. Second, the costs associated with misclassification errors are not necessarily known during training. Lastly, these misclassification costs are of course context-dependent across applications, and can vary through time. Our paper thus aims to provide a general framework for decoding hierarchical classifiers with respect to any specified metric.

*The visibility of Figure 2 is poor.*

We will improve the visibility of Figure 2 for better clarity.
*Could the authors add the missing error bars and statistical significance testing for the experiments?*

Figure 2 is a boxplot, which already provides information about the distribution of results across models. However, it is true that we did not run multiple experiments for each of the 19 models tested, and there are several reasons for this. First, unlike learning algorithms, our decoding algorithms are deterministic, meaning the same probability will always produce the same prediction, regardless of the context. Second, when using pre-trained models, they come with a single checkpoint and a predefined test set, making it uncommon to report confidence intervals in this setup. Lastly, for fine-tuned models with hierarchical losses, we could have conducted multiple trainings (with different seeds), but we chose not to, in order to avoid creating an asymmetry with the pre-trained models.

*How does the proposed method compare to prior work, for example, Cao et al. (2024), when used with the tree distance loss and its generalized forms?*

The comparison of our newly introduced Bayes-optimal decodings to prior work is presented throughout the experimental section (Section 5). We have selected 8 baselines for comparison, as shown in Figure 2. Cao et al. (2024) provide a theoretical result on Bayes-optimal decoding for the generalized version of the tree distance loss. We propose a more general framework and recover the Cao et al. result with Lemma 4.3, as discussed in Section 4.2.1. In our empirical results, we chose to use only the Tree Distance Loss rather than its generalized version, due to its limited adoption in the hierarchical classification literature. However, if we had used the generalized version, our algorithm would have performed equivalently to Cao et al.'s closed-form decoding, as both are optimal.
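For concreteness, the Bayes-optimal decoding discussed across these rebuttals reduces, in its brute-force form, to returning the node that minimizes the expected loss Σ_l p(l) L(n, l) under the estimated leaf distribution p. This O(|nodes| × |leaves|) search is the baseline that the paper's faster algorithms improve upon (minimal sketch; the dict/list data structures are assumptions, not the paper's implementation):

```python
def brute_force_decode(nodes, leaves, p, loss):
    """Return the candidate node with minimal expected loss under the
    estimated leaf distribution `p` (a dict: leaf -> probability), for an
    arbitrary metric-specific loss(node, leaf).

    Runs in O(|nodes| * |leaves|): the brute-force baseline, not the
    faster O(d_max * |L| + |N|) algorithm from the paper.
    """
    return min(nodes, key=lambda n: sum(p[l] * loss(n, l) for l in leaves))
```

For instance, with a 0/1 loss on leaves and a cheaper "abstain at the root" option, the decoder picks whichever candidate has the lowest expected cost under p.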
Summary: In this paper, the authors study the problem of hierarchical classification, i.e., a variant of the multiclass classification problem with a predefined label hierarchy. The main focus of this paper is the decoding of the optimal prediction w.r.t. a family of performance metrics called hierarchically reasonable metrics from the estimated class probability. The authors first show that this family includes a number of existing metrics, and then provide general decoding algorithms with optimality guarantees. Extensive experiments on different real-world hierarchical classification benchmarks and different metrics demonstrate the effectiveness of the proposed decoding method.

Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.

Methods And Evaluation Criteria: The selected benchmark datasets are well-suited for hierarchical classification scenarios, while sample blurring effectively simulates the increasing “hardness” of samples.

Theoretical Claims: The claims are valid and rigorously proved.

Experimental Designs Or Analyses: The experiment designs and analyses are valid and of real-world significance.

Supplementary Material: I reviewed Appendix B to check the relation between this work and existing metrics.

Relation To Broader Scientific Literature: The prior related findings, including optimality analyses of certain hierarchical classification metrics and their heuristic decoding, are naturally related to this work since they can be seen as special cases of the proposed method.

Essential References Not Discussed: The related works are thoroughly discussed and compared in Section 2.

Other Strengths And Weaknesses: 1. This work provides a unified framework for eliciting the Bayes-optimal prediction from class probability, which is flexible and can be combined with an arbitrary proper scoring rule for class probability estimation. 2.
The perspective of this work is also novel in that it focuses on the inference of the optimal prediction over a potentially complex prediction space, which is often neglected in the research on traditional multiclass/multilabel classification.

Other Comments Or Suggestions: It is encouraged to include the proposed algorithms or provide a more detailed description of them in the main body.

Questions For Authors:
1. According to the problem formulation, the class probability of a label can be non-zero only if it is a leaf node of the node hierarchy, while [1,2] are free of this assumption. Indeed, this condition implicitly assumes that the hierarchy is a 'complete' one: an instance belonging to a superclass/internal node means that it must be classified into a leaf descendant, which may be a strict condition. It could be helpful to discuss the proposed methods without this assumption.
2. While this framework provides theoretically grounded decoding methods from class probability, the estimated 'flat' class probability may not be perfect and may affect the performance of the decoding methods.

[1] Ramaswamy, H., Tewari, A., and Agarwal, S. Convex calibrated surrogates for hierarchical classification. ICML'15.
[2] Cao, Y., Feng, L., and An, B. Consistent hierarchical classification with a generalized metric. AISTATS'24.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We warmly thank reviewer k32x for their positive feedback and very insightful comments. We try below to answer your questions.

*it is encouraged to include the proposed algorithms or provide a more detailed description of them in the main body.*

As the reviewer suggests, we will include the proposed algorithms within the core of the paper.

*According to the problem formulation, the class probability of a label can be non-zero only if it is a leaf node of the node hierarchy, while [1,2] are free of this assumption. Indeed, this condition implicitly assumes that the hierarchy is a 'complete' one: an instance belonging to a superclass/internal node means that it must be classified into a leaf descendant, which may be a strict condition. It could be helpful to discuss the proposed methods without this assumption.*

We thank the reviewer for this insightful question. We acknowledge that our framework may appear more restrictive than that of [1,2], and this was a key consideration in our research. One practical way to relax this assumption while remaining within our framework is to modify the hierarchy **before** training. Specifically, we can introduce a "stopping" node (a leaf node) at **each** internal node n of the hierarchy. The likelihood of this stopping node would represent the probability of belonging to any subcategory of node n, **excluding** the children of n already present in the hierarchy. This effectively "completes" the hierarchy, making it more general, though it prevents us from using pre-trained models and also introduces additional technical considerations and modifications to the hierarchy structure. In such a case, our theorems remain valid. Lastly, we also adopted our current setup to align with recent work in the field, including Karthik et al. (2021) and Jain et al. (2023), and because the considered datasets fulfill the leaf node label assumption (meaning each instance is categorized as a leaf).
We appreciate this discussion and will consider incorporating these clarifications into the paper.

*While this framework provides theoretically grounded decoding methods from class probability, the estimated 'flat' class probability may not be perfect and may affect the performance of the decoding methods.*

Again, this represents a very interesting subject, which we plan to explore further in future work. Specifically, one promising direction is to investigate how to adjust either the optimal strategy or the probability distributions when accounting for the fact that these estimations are imperfect.
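The hierarchy-completion trick described in this rebuttal (a dedicated "stopping" leaf under each internal node) can be sketched as a pre-training transformation (illustrative sketch; the dict-of-children encoding and the node naming are assumptions, not the paper's):

```python
def complete_hierarchy(children):
    """Given a hierarchy as a dict mapping each node to its list of
    children (leaves map to []), add a dedicated 'stopping' leaf under
    every internal node. The stopping leaf's probability mass stands for
    instances belonging to the internal node but to none of its listed
    children, which 'completes' the hierarchy as the rebuttal describes.
    (Representation and naming are illustrative assumptions.)
    """
    completed = {}
    for node, kids in children.items():
        if kids:  # internal node: append a stopping leaf
            stop = f"{node}/stop"
            completed[node] = list(kids) + [stop]
            completed[stop] = []
        else:  # already a leaf: unchanged
            completed[node] = []
    return completed
```

After this transformation, every node of the original tree has a leaf descendant that can absorb "belongs here but to no listed child" mass, so the leaf-only probability assumption becomes non-restrictive.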
Summary: The paper introduces a framework for optimal decoding in hierarchical classification, where predictions are structured in a tree-like taxonomy. Unlike standard classification, the severity of errors varies based on the distance in the hierarchy between the predicted and true labels. Most existing methods use heuristics (like argmax or thresholding) for decoding model outputs, which may not align with the evaluation metric. The authors propose post-hoc optimal decision rules tailored to specific hierarchical evaluation metrics (e.g., hFβ score, Tree Distance Loss, Wu-Palmer). Experimental results show that the optimal decoding method outperforms heuristics by 1–5%, and up to 10% for mistake severity, across various datasets and models. A higher value of the method is demonstrated in uncertain scenarios, such as when input images are blurred, where simple decoding baselines struggle.

Claims And Evidence: All main claims are supported by clear and convincing evidence, such as:
1. The proof of the algorithm's optimality, which is novel and a solid contribution in this area;
2. The experiments showing improvement over the baselines, which follows from the optimality principle;
3. The demonstration of the method at various levels of classifier quality, defined by the image blurring setup, clearly showing that the algorithm works best in cases where the original classifier is not accurate.

One comment is regarding the runtime estimations: it would make sense to add the runtime based on the number of inference examples, and with that include the pre-processing complexity of Algorithm 1 in the overall computations.

Methods And Evaluation Criteria: The proposed evaluation methods are sufficient and fully explore the leaf/node decoding strategies, showing advantages and pitfalls (related to the method's reduced value vs simple baselines in the case of accurate classifiers).
The only missing item, as acknowledged by the authors, is the exploration of the set of nodes decoding strategy, and the paper could benefit from some experimental study on this. However, this is a more complicated and less practical scenario, so in my opinion it doesn't diminish the value of the paper.

Theoretical Claims: All theoretical claims appear to be valid, providing solid results under reasonable assumptions on the loss hierarchy.

Experimental Designs Or Analyses: The experimental setup is methodologically sound, diverse, and directly aligned with the paper's claims. The paper could benefit from tests on larger scale datasets, but it is not necessary.

Supplementary Material: I checked the algorithm and glanced through the proofs and experimental studies, all appearing to be valid.

Relation To Broader Scientific Literature: From the broader context, the paper doesn't consider the learning scenario, only taking a given classifier output (this is acknowledged by the authors). From that perspective there is a pool of works which integrate the label hierarchy into the training process, from hierarchical softmax and its variations to multi-label scenarios. As mentioned by the authors, it would be really interesting to see how the framework can be incorporated into model training.

Essential References Not Discussed: All references essential for the paper, such as [Karthik, 2021] and [Ramaswamy, 2015], are discussed. Potentially, the authors could relate the work to structured prediction, as in (Kulesza, 2007): Structured learning with approximate inference.

Other Strengths And Weaknesses: Overall, this appears to be a clear and well-structured paper with a solid contribution, walking through all necessary details, with a comprehensive experimental study. Some mentioned shortcomings, such as the absence of an empirical study for the set of nodes decoding strategy, or not applying the strategy in training, to me do not diminish the paper's value and can be considered future work.
Other Comments Or Suggestions: No other comments.

Questions For Authors: No other questions.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We warmly thank the reviewer maE5 for their positive feedback and their acknowledgement of the significance of our work. We answer your different points below.

*One comment is regarding the runtime estimations, it would make sense to add the runtime based on the number of inference examples, and with that include the pre-processing complexity of Algorithm 1 to overall computations.*

Indeed, varying the total number of test instances and analyzing the overall runtime evolution, including the preprocessing time of Algorithm 1, would provide a more comprehensive evaluation. We acknowledge that Table 9 in Appendix C.3 does not currently account for this. We will conduct this experiment and include the results in Appendix C.3.

*The only missing item, as acknowledged by the authors, is the exploration of the set of nodes decoding strategy, and the paper could benefit from some experimental study on this. However, this is a more complicated and less practical scenario, so in my opinion it doesn't diminish the value of the paper.*

The reviewer is correct when pointing out that we did not develop **heuristic** decoding strategies for *subset of nodes* decoding, which would have ensured a fair comparison with our **optimal** *subset of nodes* decoding strategies for the $hF_{\beta}$-score of Section 4.3. We did attempt to design a simple heuristic based on thresholding derived from Lemma 4.6, but it resulted in poor performance.

*Kulesza, 2007: Structured learning with approximate inference.*

This is indeed a reference we missed and which is relevant to our problem. We will incorporate it in the paper.
Pruning Spurious Subgraphs for Graph Out-of-Distribution Generalization
Reject
Summary: This paper proposes PrunE, a pruning-based method designed to address the challenge of out-of-distribution (OOD) generalization for Graph Neural Networks (GNNs). Rather than focusing on directly identifying invariant subgraphs, PrunE prunes spurious edges to preserve the invariant subgraph. The method uses two key regularization terms: a graph size constraint to exclude uninformative edges and ϵ-probability alignment to suppress spurious edges. Theoretical analysis and extensive experiments demonstrate that PrunE outperforms existing methods for OOD generalization across multiple benchmark datasets. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes Theoretical Claims: Yes, the proofs for theoretical claims are correct. Experimental Designs Or Analyses: Yes, the experimental designs and analysis of the paper are reasonable. Supplementary Material: Yes, I noticed that the author provided additional experimental results and theoretical proof in the supplementary materials. Relation To Broader Scientific Literature: This work contributes to a deeper understanding of subgraph pruning in graph-based learning tasks, and its success in enhancing OOD generalization could inspire future research on pruning techniques for large-scale, dynamic, and self-supervised graphs. However, the starting point of this paper bears similarities to previous work [1], making its novelty questionable. [1] Learning Graph Invariance by Harnessing Spuriosity. ICLR 2025 Essential References Not Discussed: The paper could cite more works, such as LIRS [1]. This work adopts a similar approach at the representation level, and removing environment representations at the representation level may be more efficient than doing so at the structural level. [1] Learning Graph Invariance by Harnessing Spuriosity. 
ICLR 2025 Other Strengths And Weaknesses: 1. The paper presents a novel pruning-based approach to OOD generalization, which is a significant departure from prior methods that focus on direct invariant subgraph identification, leading to better retention of meaningful structural information. 2. Theoretical justifications are well-developed and clearly presented, providing formal guarantees that pruning spurious edges improves OOD generalization. 3. Extensive experiments across multiple datasets demonstrate strong empirical performance, with significant improvements over baseline methods in both synthetic and real-world settings. Other Comments Or Suggestions: 1. I believe this paper lacks novelty. The motivation of the paper is that directly predicting invariant subgraphs from the graph structure is difficult, so the authors propose using certain techniques to predict the environmental subgraphs and then remove the environmental structure from the graph to identify the invariant subgraphs. However, directly predicting from the structural end seems inefficient, and similar research has already been conducted from the representation end. This makes the paper appear as a product of a feasible but not particularly innovative approach. 2. This paper does not seem to compare its performance with LIRS, a study that shares a similar motivation. Since both papers address similar objectives, I suggest that the authors include LIRS as a baseline method for comparison if possible. Lack of hyperparameter analysis and guidelines for hyperparameter tuning. 3. The paper does not appear to provide an analysis of the time and space complexity of the proposed model. Similarly, the paper does not provide a comparison of the time and space consumption of the proposed model against other baseline methods. 
This omission may leave readers uncertain about the efficiency and memory requirements of the model. Questions For Authors: Regarding the questions, please refer to the weaknesses section in the strengths and weaknesses part. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive feedback! Please see below for our responses to your comments and concerns. > **Q1: Novelty Issue** __Response:__ One significant distinction between the proposed PrunE method and most existing OOD approaches lies in its learning paradigm. Specifically, PrunE focuses on pruning uninformative spurious edges, rather than directly identifying invariant subgraphs or explicitly learning invariant features, a strategy commonly employed by most prior methods such as IRM, VRex, DIR, GSAT, CIGA, and AIA. Therefore, we respectfully emphasize that PrunE differs from the majority of existing OOD methods in both its learning paradigm and the underlying OOD objectives. We thank the reviewer for pointing out the related work LIRS which was recently accepted. While PrunE and LIRS share some conceptual commonality, their motivations and methodology differ significantly. - **Different Motivations.** LIRS aims to learn a more complete set of invariant features through a representation-level approach. In contrast, PrunE is motivated by the difficulty of directly identifying invariant subgraphs. It addresses this by pruning uninformative spurious edges, which facilitates the preservation of invariant substructures. - **Technical Design.** LIRS adopts a multi-stage learning paradigm that first learns spurious features and then learns invariant features. In contrast, PrunE employs a single-stage training framework with two novel OOD regularization terms that are distinct from prior work. Compared with LIRS, PrunE presents several unique advantages: - **Single-stage training and fewer hyperparameters.** LIRS involves multiple stages, with __nearly 100 hyperparameter combinations__ in total; in contrast, PrunE demonstrates robust performance with a limited set of hyperparameters (as discussed in lines 183 and 212) across datasets, greatly reducing model-tuning efforts. 
- **Interpretability.** LIRS operates in latent space and thus lacks interpretability in terms of input structures. PrunE, by operating in the input space, is not only efficient and effective but also offers interpretability by identifying critical subgraphs that explain the model prediction. In summary, PrunE is technically and conceptually distinct from LIRS, with different motivations, and offers several unique advantages. We appreciate the reviewer’s suggestion and will include a discussion and comparison with LIRS in our revised paper. > **Q2: Comparison with LIRS and hyperparameter analysis** __Response:__ Thank you for your thoughtful question. LIRS and PrunE exhibit notable differences in performance across datasets with different features. Specifically, on the *Motif-Base* dataset, PrunE achieves 91.40% accuracy, significantly outperforming LIRS (75.50%). In contrast, on the *Motif-Size* dataset, LIRS performs better than PrunE. This also highlights the different inductive biases of the two methods. We will add the comparison with LIRS to our revised paper. Regarding hyperparameter sensitivity, PrunE achieves strong performance using a fixed hyperparameter setting, thereby alleviating the need for hyperparameter search. For further details, please refer to our response to Reviewer `ncwH` due to character limits. > **Q3: Time and space complexity analysis** __Response:__ As discussed in Appendix E, the time complexity of PrunE is $\mathcal{O}(CkmF)$, where $k$ is the number of GNN layers, $m$ is #edges, and $F$ is the feature dimension. $C>1$ accounts for the additional use of the subgraph selector $t(\cdot)$. The space complexity is $\mathcal{O}(C'|\mathcal{B}|mkF)$, where $|\mathcal{B}|$ is the batch size and $C'$ reflects the additional memory from $t(\cdot)$. The time and memory costs are both on par with ERM. To further address the reviewer's concern, we conducted additional experiments to evaluate its runtime and memory consumption as below. 
| Memory consumption (in MB) | Motif-base | Molbbbp |
|:---:|:---:|:---:|
| ERM | 40.62 | 32.43 |
| IRM | 51.76 | 36.19 |
| VRex | 51.52 | 35.92 |
| GREA | 103.22 | 76.28 |
| GSAT | 90.12 | 58.02 |
| CIGA | 104.43 | 72.47 |
| AIA | 99.29 | 81.55 |
| LIRS | 89.15 | 107.37 |
| PrunE | 74.15 | 61.07 |

| Running time (in seconds) | Motif-base | Molbbbp |
|:---:|:---:|:---:|
| ERM | 494.34 ± 117.86 | 92.42 ± 0.42 |
| IRM | 968.94 ± 164.09 | 151.84 ± 7.53 |
| VRex | 819.94 ± 124.54 | 129.13 ± 12.93 |
| GREA | 1612.43 ± 177.36 | 262.47 ± 45.71 |
| GSAT | 1233.68 ± 396.19 | 142.47 ± 25.71 |
| CIGA | 1729.14 ± 355.62 | 352.14 ± 93.32 |
| AIA | 1422.34 ± 69.33 | 217.36 ± 11.04 |
| LIRS | 504.87 ± 24.04 | 421.32 ± 19.86 |
| PrunE | 501.62 ± 7.64 | 133.35 ± 3.47 |

As PrunE only introduces two lightweight regularization terms on the subgraph selector, it is highly efficient in both runtime and memory consumption (**3.15x faster** than LIRS in Molbbbp), highlighting its advantage in computational efficiency. --- We sincerely thank the reviewer for the careful review and insightful feedback. We hope that our responses have adequately addressed your concerns regarding our study. --- Rebuttal Comment 1.1: Comment: I confirm that I have read the author response to my review and will update my review in light of this response as necessary.
Summary: In this paper, the authors study the problem of graph-level out-of-distribution (OOD) generalization. Their key claim is learning a sparser graph structure from the vanilla graph by pruning spurious edges, which they show is effective in preserving the invariant substructure and thus beneficial for OOD generalization. In implementation, the authors adopt a learnable subgraph selector, which assigns each edge in the graph a learnable weight. By the proposed loss function, the model is required to make the summation of these weights smaller than the number of edges in the vanilla graph. They further design another loss to align edges with the lowest weights to a small value in order to suppress spurious edges. The authors provide theoretical justification of the proposed method and conduct comprehensive experiments to verify its effectiveness. *** **Update after Rebuttal** Thanks to the authors for their responses, which have adequately addressed my concerns. Currently, I have no other concerns. I have raised my score to 4. Claims And Evidence: The claim in this work is clear and reasonable. The authors have provided detailed analysis to demonstrate why pruning spurious edges is helpful for OOD generalization. Intuitively, by assigning spurious edges smaller weights, the model could focus more on the subgraph structure that is invariant under distribution shift, and thus it could have better OOD generalization ability. Methods And Evaluation Criteria: The experiments are conducted on benchmark datasets, and the experimental setting follows previous studies. The authors also provide some visualization of the learned subgraph selector. From my perspective, these experimental results are sufficient to support the effectiveness of the proposed method. Theoretical Claims: I have carefully checked the proofs in the appendix. The overall proof process is correct. The only place where I am unclear is Eq. 
(21), where the authors seem to miss the term $\vert \mathbb{E}[L_c(\theta, D)] - \mathbb{E}[L_c(\theta, S)] \vert$. In other words, Eq. (21) holds only when $\mathbb{E}[L_c(\theta, D)] - \mathbb{E}[L_c(\theta, S)] = 0$ holds. I encourage the authors to clarify this. Experimental Designs Or Analyses: I checked the experimental designs and results in the main text. From my view, they are sound and sufficient to support the effectiveness of the proposed method. Supplementary Material: The authors do not provide any supplementary material. Relation To Broader Scientific Literature: The key contribution of this work is introducing the idea of learning invariant subgraph structure via pruning spurious edges and designing a simple and effective framework to achieve this. This could bring new insights to the graph learning community, including researchers who focus on the OOD problem and others who focus on learning from graphs with noisy structure, namely, where edges may be missing or incorrect. Essential References Not Discussed: From my view, there are no related works that are essential to understanding the key contributions of the paper but are not currently cited or discussed in the paper. Other Strengths And Weaknesses: The strength of this paper is proposing a simple and effective method for the graph-level OOD problem. The motivation is clear and reasonable, namely, learning invariant subgraph structure by pruning spurious edges. Also, the experimental results of the proposed method are impressive. The weakness of this paper is that the proposed method is heuristic, since it simply uses the combination of two functions to encourage the model to assign certain edges small weights and align them to a small value. It is still unclear whether the model could always correctly find those spurious edges and assign them small weights. And the theoretical analysis for the success of this method is also not sufficient. 
The authors attribute this to ERM, though I cannot fully agree with them. Since a GNN is adopted as the learning model and the loss is minimized via stochastic optimization algorithms, the learned parameters are more likely to be a local optimum rather than the global optimum. From my personal understanding, the model may settle in local optima in which spurious edges are assigned small weights, due to the implicit bias of the learning algorithm. Therefore, analyzing from the perspective of the optimization algorithm could be a promising future direction. Other Comments Or Suggestions: Typo: According to Eq. (5), the overall objective is $\mathcal{L} = \mathcal{L}_{GT} + \lambda_1 \mathcal{L}_e + \lambda_2 \mathcal{L}_s$. However, in line 11 of Algorithm 1, the overall objective is $\mathcal{L} = \mathcal{L}_{GT} + \lambda_1 \mathcal{L}_e + \lambda_2 \mathcal{L}_{div}$. I think that the term $\mathcal{L}_{div}$ should be corrected to $\mathcal{L}_s$. Suggestion: There is no clear definition of the learnable subgraph selector $t(\cdot)$, and I can only infer that it is a mapping $t: \mathbb{R}^{n \times n} \times \mathbb{R}^{n \times D} \to \mathbb{R}^{n \times n}$. From my understanding, I think that $t(G)$ is defined by resetting each edge $e_{ij}$ in $\mathcal{E}$ via $e_{ij} \sim \text{Bernoulli}(p_{ij})$. I suggest that the authors clarify this. Questions For Authors: 1. Can the model always correctly find those spurious edges and assign them small weights? If so, why? 2. The proposed framework seems to apply only to the graph-level OOD problem. Is it possible to extend it to node-level or edge-level OOD problems? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and positive feedback! Please see below for our responses to your comments and concerns. --- > **Q1: The effectiveness of PrunE in assigning low probability weights to spurious edges** __Response:__ Thank you for raising this crucial point. Through extensive experiments, we find that the effectiveness of identifying and pruning spurious edges relies on two critical factors: - The size of the spurious subgraphs $G_s$. - The complexity of the topological structures of $G_s$. As $|G_s|$ increases and the spurious structures become more intricate, the performance of all OOD methods tends to degrade. This is primarily because certain spurious substructures may exhibit strong correlations with the target labels, leading to misclassification of invariant substructures and overestimation of spurious edges. One such example is the *Motif-Base* and *Motif-Size* datasets, where the OOD performance of most methods drops significantly on the Motif-Size dataset due to the increased size of $G_s$ and the more intricate spurious subgraph topology. Similar to existing methods, PrunE will fail to assign low probability scores to some spurious edges in these scenarios. However, PrunE is also able to assign high probability scores to invariant edges in $G_c$, while previous methods that attempt to directly identify these edges tend to assign low probability to them. This ability is critical for the improved OOD generalization performance compared to prior approaches that attempt to identify invariant subgraphs directly. How to further identify and suppress spurious edges that are strongly correlated with target labels remains a challenging problem and represents a promising direction for our future research. > **Q2: Extending to node-level and edge-level OOD tasks** __Response:__ Thank you for raising this important point. 
We have conducted experiments using PrunE on the Cora-Word and Cora-Degree datasets, but the performance is comparable to that of ERM. Similarly, many OOD algorithms, such as IRM, VRex, and GroupDRO, that are effective in the vision domain and graph-level OOD tasks tend to perform on par with or even worse than ERM in node-level OOD settings, as evidenced in [1]. This discrepancy may arise from fundamental differences between the two problem settings. Specifically, in node-level OOD tasks, samples (i.e., nodes) are interconnected and thus not independently and identically distributed, whereas this issue does not arise in vision or graph-level OOD datasets, where each sample is treated independently. Due to these different characteristics, methods designed for graph-level OOD generalization and those targeting node- or edge-level OOD challenges are typically developed __separately__. In line with PrunE, most existing graph-specific OOD methods, such as DIR, DisC, CAL, GREA, GSAT, CIGA, and AIA, also focus solely on graph-level OOD settings. > **Q3: A new perspective from optimization** __Response:__ While our work is inspired by recent findings [2, 3] that ERM tends to learn both invariant and spurious features, we fully agree with the reviewer that analyzing the implicit bias and regularization effects from an optimization perspective, to explain why the learned solution may generalize well, is a compelling direction. We thank the reviewer for highlighting this perspective and will consider it in our future research. > **Q4: Theoretical claims** __Response:__ Thank you for your careful review! As $L_c(\theta, \cdot)$ is defined as the loss computed on the invariant subgraph, which remains unchanged under any distribution shift, it follows that $\mathbb{E}\left[L_c(\theta, D)\right] - \mathbb{E}\left[L_c(\theta, S)\right] = 0$ under Assumption 1. We have added additional discussion in Appendix D.2 to further clarify this point. 
> **Q5: Implementation of subgraph selector $t(\cdot)$** __Response:__ We appreciate the reviewer’s careful review. The function $t(\cdot)$ is implemented as a GNN model (e.g., a 2-layer GIN) followed by an MLP that models independent edge weights $p_{ij}$, where each edge is treated as a Bernoulli random variable. We have added additional clarification regarding this implementation detail in Section 4 of the revised manuscript. > **Q6: Typos** __Response:__ Thank you for your careful review! We have corrected this typo in the pseudo-code of Algorithm 1. --- We sincerely thank the reviewer for the careful review and insightful feedback. We hope that our responses have adequately addressed your concerns regarding our study. --- **References:** 1. Gui, et al., GOOD: A Graph Out-of-Distribution Benchmark, NeurIPS 2022 2. Kirichenko et al., Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations. ICLR 2023 3. Chen et al., Towards Understanding Feature Learning in Out-of-Distribution Generalization. NeurIPS 2023
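As a rough, self-contained illustration of the selector described in Q5 (this is not the authors' code: the single mean-aggregation layer standing in for the 2-layer GIN, the random weights, and the linear edge-scoring "MLP" are all our assumptions), the Bernoulli edge-weighting idea can be sketched in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def subgraph_selector(X, edges, W_gnn, W_mlp):
    """Toy edge selector: one mean-aggregation message-passing layer
    (a stand-in for a 2-layer GIN), then a linear scorer mapping each
    edge (i, j) to an independent Bernoulli probability p_ij."""
    n, d = X.shape
    agg = np.zeros_like(X)
    deg = np.zeros(n)
    for i, j in edges:
        agg[i] += X[j]; agg[j] += X[i]
        deg[i] += 1.0; deg[j] += 1.0
    H = np.tanh((X + agg / np.maximum(deg, 1.0)[:, None]) @ W_gnn)
    # Score each edge from the concatenated endpoint embeddings.
    p = np.array([sigmoid(np.concatenate([H[i], H[j]]) @ W_mlp) for i, j in edges])
    mask = rng.random(len(edges)) < p  # e_ij ~ Bernoulli(p_ij)
    return p, mask

n, d = 5, 4
X = rng.normal(size=(n, d))                     # node features
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]  # a 5-cycle
W_gnn = rng.normal(size=(d, d))                 # untrained toy weights
W_mlp = rng.normal(size=(2 * d,))
p, mask = subgraph_selector(X, edges, W_gnn, W_mlp)
```

In the actual method the probabilities would be shaped by training through the overall loss $\mathcal{L} = \mathcal{L}_{GT} + \lambda_1 \mathcal{L}_e + \lambda_2 \mathcal{L}_s$, rather than computed from random weights as here.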
Summary: The authors introduce PrunE, a pruning-based method for enhancing out-of-distribution generalization in GNNs. The method removes spurious edges to address this challenge. Theoretical guarantees are provided, and experiments show that PrunE obtains better results compared with other methods. Claims And Evidence: Yes, claims are clear and convincing. Methods And Evaluation Criteria: Yes, methods and evaluation make sense. Theoretical Claims: Yes, the theoretical claims seem valid. Experimental Designs Or Analyses: Yes. The authors have tested the method on datasets of various sizes and types in different domains. Supplementary Material: No supplementary material and the source code hasn't been provided. This leads to issues regarding reproducibility. Relation To Broader Scientific Literature: The paper builds on existing graph OOD and causal learning literature but introduces a new paradigm of pruning spurious edges rather than directly identifying invariant subgraphs. This approach aligns with recent advances in causal learning, feature selection, and information bottlenecks but is innovative in the context of graph-based OOD generalization. Essential References Not Discussed: To the best of my knowledge, no. Other Strengths And Weaknesses: Strengths: - The proposed method achieves very good results compared with baselines. The experiments are extensive and reasonable. - The combination of graph size constraint and probability alignment as regularisation terms seems innovative. Weaknesses: - The performance of PrunE relies on careful tuning of hyperparameters like the graph size constraint and ϵ-probability alignment. The sensitivity analysis in Figure 4 shows that inappropriate choices of η and K can significantly reduce performance. Is there clear guidance on how to choose these parameters? - Only tested with GCN and GIN. Why not test on more GNN encoders with potentially better expressiveness? Other Comments Or Suggestions: N/A Questions For Authors: See weaknesses. 
Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive feedback and careful review! Please see below for our responses to your comments and concerns. --- > **Q1: Reproducibility issue** __Response:__ As the official policy permits only figures and tables in the anonymous link, we have requested approval from the conference to share a link to our code. We are currently awaiting approval, and we have made the code available along with instructions on how to use it. > **Q2: Hyperparameter sensitivity** __Response:__ Thank you for raising this important point. While inappropriate choices of $\eta$ and $K$ can indeed lead to performance degradation, we have found that setting $K=90$, $\lambda_1=10$, $\lambda_2=10^{-3}$, and $\eta \in \{0.75, 0.85\}$ yields consistently stable performance across both synthetic and real-world datasets, as discussed in lines 183 and 212 of our paper. This demonstrates a key advantage of PrunE over most existing graph OOD methods, which typically require extensive hyperparameter tuning. We will include additional discussion on the selection of hyperparameters in the _Hyperparameter Sensitivity_ section of the revised paper. > **Q3: Testing with more expressive GNNs** __Response:__ Thank you for this insightful question. We primarily adopted GCN and GIN, two GNN architectures with different levels of expressiveness, for the following reasons: - **Experimental convention.** Prior work in graph-level OOD generalization commonly adopts GCN and GIN as backbone architectures. Similarly, widely-used graph OOD benchmark datasets, such as GOOD [1], typically follow the same practice. - **Integration with the PrunE framework.** More expressive GNNs typically involve high-order message passing; however, it is non-trivial to incorporate __high-order__ message passing into PrunE, particularly in the computation of $\mathcal{L}_{GT}$ in Eqn. (6). 
This loss involves computing $f(t(G))$, where $t(G)$ down-weights spurious edges while preserving invariant ones, followed by a GNN encoder operating on the reweighted graph via first-order message passing. For high-order message passing, which involves aggregating information from __non-adjacent nodes__, it is unclear how to prune or control message flow in a principled way, as the pruning operation in $t(\cdot)$ does not naturally apply to higher-order interactions. - **Diverse designs of more expressive GNNs.** While many expressive GNNs go beyond first-order message passing, they do so in fundamentally different ways. For instance, PPGN [2] captures pairwise node interactions using outer products, while $K$-hop GNNs [3] aggregate messages over $K$-hop neighborhoods, and subgraph-based GNNs [4] extract a rooted subgraph for each node independently. These technical differences imply that a unified pruning mechanism may not apply, and different designs may require distinct treatments for integration with PrunE. For these reasons, we adopt GCN and GIN, both of which rely on first-order message passing and can be naturally incorporated into the PrunE framework via edge reweighting. Nevertheless, we fully agree with the reviewer that integrating more expressive GNN architectures into PrunE is a promising direction for our future research. --- We sincerely thank the reviewer for the careful review and insightful feedback. We hope that our responses have adequately addressed your concerns regarding our study. --- **References:** 1. Gui, et al., GOOD: A Graph Out-of-Distribution Benchmark, NeurIPS 2022 2. Maron, et al., Provably Powerful Graph Networks, NeurIPS 2019 3. Nikolentzos, et al., k-hop Graph Neural Networks, Neural Networks 4. Zhang, et al., Nested Graph Neural Networks, NeurIPS 2021
Summary: This paper introduces PrunE, a novel pruning-based method to enhance OOD generalization in GNNs. Unlike previous approaches that attempt to directly identify invariant subgraphs, PrunE focuses on pruning spurious edges, preserving the invariant subgraph more effectively. The method employs graph size constraints and ϵ-probability alignment to eliminate spurious edges. The authors provide theoretical guarantees and extensive empirical evaluations, demonstrating that PrunE outperforms existing methods. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I didn't check the details of the proof Experimental Designs Or Analyses: It would be beneficial to include an analysis of when and why PrunE fails Supplementary Material: No Relation To Broader Scientific Literature: Yes Essential References Not Discussed: I think the current discussion on related work is proper, but I am not quite familiar with the field. Other Strengths And Weaknesses: Strength: This paper introduces a novel paradigm focusing on removing spurious edges rather than directly identifying edges in $G_c$. By pruning spurious edges, PrunE preserves more edges in $G_c$ than previous methods, thereby improving its OOD generalization performance. The effectiveness of the proposed approach is validated via both theoretical and empirical analyses. Weakness: The method involves additional regularization terms and subgraph selection, which may introduce computational overhead. A scalability analysis on large-scale datasets should be provided. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive feedback and insightful comments! Please see below for our responses to your comments and concerns. --- > **Q1: When and why PrunE may fail** __Response:__ Based on our empirical observations, the OOD generalization performance of PrunE, as well as other OOD methods, can be significantly influenced by: __i)__ the size of the spurious subgraphs $G_s$ and __ii)__ the complexity of their topological structures. As $|G_s|$ increases and the spurious substructures become more intricate, the performance of all methods tends to degrade. This is primarily because certain spurious substructures may exhibit strong correlations with the target labels, leading to misclassification of invariant substructures and overestimation of spurious edges. One such example is the *Motif-Base* and *Motif-Size* datasets, where the OOD performance of most methods drops significantly in Motif-size dataset due to the increased size of $G_s$ and the more intricate spurious subgraph topology. While this phenomenon also affects the performance of PrunE, our experimental analysis reveals that, despite occasionally assigning high probabilities to spurious edges (as seen in other methods), PrunE is also able to consistently estimate invariant edges with high confidence. This ability is critical for its superior OOD generalization performance relative to prior approaches. Nonetheless, how to identify and suppress spurious edges that are strongly correlated with target labels remains a challenging problem and represents a promising direction for our future research. > **Q2: Computational efficiency and scalability** __Response:__ We thank the reviewer for raising this point. In the context of graph-level OOD generalization, each sample corresponds to an individual graph, typically containing at most a few hundred nodes. 
This contrasts with node classification tasks, where each sample is a node within a potentially massive graph comprising millions of nodes. As such, scalability is generally not a major concern for graph-level classification datasets. To further address the reviewer’s concern regarding computational efficiency, we conducted additional experiments to evaluate runtime and memory overhead on two datasets.

__Table 1: Memory consumption of various methods (in MB)__

| | Motif-base | Molbbbp |
|:---:|:---:|:---:|
| ERM | 40.62 | 32.43 |
| IRM | 51.76 | 36.19 |
| VRex | 51.52 | 35.92 |
| GREA | 103.22 | 76.28 |
| GSAT | 90.12 | 58.02 |
| CIGA | 104.43 | 72.47 |
| AIA | 99.29 | 81.55 |
| PrunE | 74.15 | 61.07 |

__Table 2: Runtime of various methods (in seconds)__

| Method | Motif-base | Molbbbp |
|:---:|:---:|:---:|
| ERM | 494.34±117.86 | 92.42±0.42 |
| IRM | 968.94±164.09 | 151.84±7.53 |
| VRex | 819.94±124.54 | 129.13±12.93 |
| GREA | 1612.43±177.36 | 262.47±45.71 |
| GSAT | 1233.68±396.19 | 142.47±25.71 |
| CIGA | 1729.14±355.62 | 352.14±93.32 |
| AIA | 1422.34±69.33 | 217.36±11.04 |
| PrunE | 501.62±7.64 | 133.35±3.47 |

As shown in the tables above, compared to most graph-specific OOD methods, PrunE exhibits advantages in computational efficiency, as it introduces only two lightweight regularization terms on the subgraph selector. In contrast, many existing methods rely on more expensive operations such as data augmentation or contrastive learning. This highlights the computational efficiency of our approach. --- We sincerely thank the reviewer for the careful review and insightful feedback. We hope that our responses have adequately addressed your concerns regarding our study.
A Simple Model of Inference Scaling Laws
Accept (poster)
Summary: This paper investigates how neural models' inference performance scales with multiple attempts, particularly in the context of LLMs. The study introduces a straightforward statistical framework based on memorization to explore the relationship between inference attempts and success rates, measured by the coverage or pass@k metric. This metric reflects the probability of obtaining the correct answer across repeated attempts. The authors derive an "inference loss" that shows a predictable decay in error with increasing trials, linking it to prompting costs. They validate their model through empirical experiments with LLMs on reasoning tasks and a generative VAE model, confirming that their theoretical predictions align with observed data. The framework isolates the effects of inference scaling and proposes it as a foundational element for optimizing the trade-off between training and inference costs to enhance overall model performance. Claims And Evidence: Some of the claims are not supported. Please refer to the questions for more details. Methods And Evaluation Criteria: The proposed method and evaluation appear to make sense. Theoretical Claims: I verified that the proof logic is correct. Experimental Designs Or Analyses: The existing experiments in this paper are sound, but some of the claims are not supported by the experiments. Please refer to the questions for more details. Supplementary Material: I checked all the supplementary material. Relation To Broader Scientific Literature: This problem is timely, as o1 and r1 are among the most popular research topics recently. The proposed model is simple and elegant. If the authors can address the concerns about this model, I believe it will inspire future reasoning models. Essential References Not Discussed: I think all important references are included. Other Strengths And Weaknesses: **Strengths**: 1. The topic is timely and aligns with the current research trend in reasoning models. 
Pass@k is an important metric in these models, particularly for MCTS or exploration in RL. 2. The proposed model is simple and elegant. 3. It is encouraging to see that the proposed model performs well in the experiments, as evidenced by Figures 1(a), 2, 6(a), and 7(b). **Weaknesses**: I have several concerns about the proposed model: 1. I have several questions regarding the practical aspects of data generation models. (Please refer to questions 1-4) 2. Some of the experiments do not align well with the scaling laws. (Please refer to questions 5 and 6) Other Comments Or Suggestions: Some typos: 1. In Eq. (1), $E_i$ is not defined; 2. In line 305, "asmyptotes" should be "asymptotes". Questions For Authors: 1. This paper focuses on the inference scaling laws of LLMs. However, the data generation model appears to be not relevant to LLMs. I suggest that the authors discuss why the practical problems in LLMs that require inference scaling laws should follow the Hutter model. 2. It would be interesting to investigate whether the proposed model is better than a simple power law scaling. An alternative model could be $pass@k = \mathcal{A} \times (\text{Sigmoid}(\alpha k) + \beta)$. Can the proposed model fit the curve better than this simpler model? 3. To fit Figure 1(a) with Equation (7) and Figure 2 with Equation (9), should we use the same set of parameters ($\mathcal{A}, \alpha, \beta$) or different sets? According to the model, the same set of these numbers should be used given a model. 4. In Figure 1(b), it is unclear how well the Beta distribution fits the practical data. The same issue arises for Figure 4(c). 5. In Figures 6(b) and 7(a), only a small proportion of the data fits the theoretical curve well. Why does this happen? 6. In Figure 6(a), the "emergent behavior" occurs when $\epsilon=0.1$, but not for other values. Why is this the case? Code Of Conduct: Affirmed. Overall Recommendation: 3
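Since pass@k is central to the discussion above, it may help to recall how the metric is usually computed in practice. Below is a minimal sketch of the standard unbiased estimator (as popularized in the Codex evaluation literature, not taken from this paper): `n` completions are sampled per problem and `c` of them pass verification.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimator of pass@k from n sampled completions, c correct.

    Computes 1 - C(n - c, k) / C(n, k): one minus the probability that a
    random size-k subset of the n samples contains no correct completion.
    """
    if n - c < k:
        # Fewer than k incorrect samples: every size-k subset succeeds.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For `n = k` this reduces to a 0/1 indicator per problem (solved at least once or not); sampling `n > k` completions and averaging over problems lowers the variance of the estimate.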
Rebuttal 1: Rebuttal: Dear Reviewer i2X4, Thank you for your careful and positive review of our paper, finding that our proposed model is simple and elegant. We hope that our responses will alleviate your concerns. Below, we address the issues you raised. **Comments and Suggestions** 1) Thank you for pointing this out, we will complete L87: *For $n$ training samples, the expected single feature error $E_i$ is*. 2) Thank you for finding this typo, it will be fixed in the revised version. **Weaknesses and Questions** 1) *Data generation process* - While it is true that the data generation process of LLMs may seem very different from the simple memorizing model, the claim of the paper is not that the simple model perfectly captures every detail of the LLM inference process, but rather the opposite – that these details are not crucial to address inference scaling, as it is often true with pre-training scaling laws [1,2]. The only requirements from the LLM in order to fit our assumptions are that the model is sufficiently large, or has sufficient capacity to (approximately) store its training data, and that inference is done by an imperfect sampling process. The specific task we chose – which is recovering exactly the training data itself, is not what LLMs do, but it is a proxy for the task of “recovering the right answer from the training data”. Meaning that if the answer, or the path to the answer (a sequence of retrievals from its memory) existed in the training data, this model is able, with some error probability, to reach this answer. This approach explains why this simple model captures very different scenarios (LLM and VAE). In order to understand the precise connection between our model and LLM performance, it would be wise to study the internal representations of LLMs, and look for “memorizing modules”, possibly in specialized attention heads, as was suggested by reviewer kFgD, and will be part of the focus of our future works. 
We will better explain this point in the main text of the revised version, if the reviewer finds this answer acceptable and useful in understanding our goal. 2) *Other functional fits to the data* - We agree with the reviewer that ablation with simple functions could be useful; we will include a short appendix considering several functions of the suggested type to show that they cannot be used to capture the full behavior of the model: typically, sigmoid-type functions are somewhat able to capture the small $k$ regime, but never both limits. We provide a preliminary figure in https://imgur.com/kTsDM0R . 3) *Similar parameters for different descriptions* - The parameters should not be the same, but there exists a simple mapping between them that can be derived from the large and small $k$ limits. 4) *Beta fit to practical data* - Thank you for this comment. The two figures are conceptually different. Regarding Fig 4(c), we will include the empirical curves, which match very well the asymptotic behavior predicted by our model (https://imgur.com/a/8Ftb76R). Regarding Fig 1(b), this is a prediction made by our model that requires further study to properly interpret. In essence, this figure shows the "difficulty" distribution for the different models, on the specific set of maths questions. It does not fit any empirical data, but should be taken as the interpretation of the $\alpha,\beta$ parameters. The goal of the figure is to show that different models can perceive the same data at different difficulty levels, and a detailed study of this point could lead to new inference sampling techniques, based on the ratios of easy and difficult questions. We hope this explanation clarifies this point and would be happy to discuss further. 5) *Figures 6(b) and 7(a) fitting the theoretical curve* - Figs 6(b) and 7(a) are meant to serve as qualitative evidence for the effective correlation between trials, rather than an exact characterization.
A priori, the inference attempts themselves need not be correlated, but we see that there are effective long range correlations between different trials from both the reconstruction error 6(b) and its eigenvalues 7(a). We see that it is sufficient that only a bulk of the eigenvalues conform to a power-law decay to have good predictions for the pass@k metric. We will explain this point more clearly in the revised text. 6) *emergent behavior at specific threshold value* - The logic is that for larger threshold values the model must have smaller reconstruction error, and so require fewer inference attempts to succeed. The "emergence" is just a feature of the functional form of Eq. 7. It would be interesting to analyze this model in more detail and consider whether one can define "emergence" as the point where the function changes from concave to convex perhaps, and study the internal structure of the model at these points. **References:** [1] Maloney et al. https://arxiv.org/abs/2210.16859 [2] Bahri et al. https://www.pnas.org/doi/abs/10.1073/pnas.2311878121 --- Rebuttal Comment 1.1: Comment: My concerns have been addressed and thus I keep the positive score. For rebuttal 2, I think there are many other simple models in statistical mechanics and I recommend the authors to try them.
Summary: The paper introduces a statistical framework to analyze the scaling laws for LLM inference, particularly addressing how model performance (pass@k) improves with repeated inference attempts. The authors present two models: one assumes samples differ in difficulty, modeled via a Beta distribution; the other considers correlated inference attempts through a power-law correlation structure. Empirical validation is conducted on large language models (LLMs) and a Variational Autoencoder (VAE) trained on image reconstruction tasks, showing strong agreement between theoretical predictions and empirical data. Claims And Evidence: The claims are supported by empirical experiments. The authors demonstrate that their proposed analytical framework for inference scaling matches the empirical "pass@k" curves observed in multiple LLM experiments and VAE reconstructions. The findings indicate that inference performance improves predictably with repeated attempts. Methods And Evaluation Criteria: The proposed methods are reasonable and suitable for capturing the inference scaling phenomena in LLMs and generative models. pass@k is a reasonable evaluation criterion for measuring a model's performance in math reasoning and coding problems. Theoretical Claims: The paper provides analytical derivations for the inference scaling behavior under specific assumptions (e.g., independence and correlation of trials). The correctness of these derivations looks mathematically sound. Experimental Designs Or Analyses: Experimental designs for both the LLM-based mathematical tasks and the VAE reconstruction tasks are sound and appropriate. Supplementary Material: No supplementary material provided. Relation To Broader Scientific Literature: I am not sure about its relation to broader scientific literature. Essential References Not Discussed: I am not sure about the essential references not discussed.
Other Strengths And Weaknesses: Strengths: The problem this work is trying to address is of practical significance. The findings offer valuable insights into practical strategies for balancing computational costs and model performance by adjusting inference attempts. Weaknesses: - The strong assumptions regarding independence and perfect verification limit the model's direct applicability in less controlled scenarios. - Using more diverse models (other than a small VAE) could make the findings more persuasive. Other Comments Or Suggestions: N/A Questions For Authors: Could you clarify whether your scaling law predictions hold when applied to tasks beyond mathematical reasoning or VAE reconstruction, especially for tasks that cannot be verified easily? How sensitive are your conclusions to the choice of parameters α and β in the Beta distribution? How will different choices of these parameters affect the estimation? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer FFcT, Thank you for your positive appraisal of our submission. We’re glad that you found valuable insights and potential practical applications, which were our goals. We address the weaknesses raised, as well as questions below. **Weaknesses** 1) *strong assumptions* - We completely agree that both the independence assumption and the perfect verification limit are strong assumptions. However, given that our paper is the **first** to propose a model for inference scaling (as far as we know), it is natural to begin with a limiting, ideal setting and extend it in future works. It is rather surprising that even under these strong assumptions, the simple memorizing model captures the inference scaling behavior of real models as well as it does. We would like to kindly bring to the reviewer’s attention the fact that the independence assumption itself is also discussed in the main text, and the “effectively correlated trials” section (Sec. 4.2) is meant to describe this effect, if we understand the reviewer correctly. We apologize if this point was not sufficiently clear, and will try to highlight it further in the revised version. Furthermore, we are sure that relaxing the perfect verification assumption will be a very interesting avenue for future works, since verification methods are a very broad topic in themselves. 2) *More diverse models* - We agree in principle that adding more diverse controlled experiments could extend the scope of the paper, but we believe that the current evidence on LLMs and VAE reconstruction supports our predictions. The VAE experiments are meant as an intermediate step that shares some of the complexity of LLMs without the full pipeline. If the reviewer has a particular experiment in mind that would make sense within the “memorization-inference” setup, which is neither VAE nor the LLMs, we would be happy to test our predictions on it.
**Questions** 1) *Scaling beyond tasks in the main text* - The experiments given in the main text which consider LLMs on reasoning tasks and the VAE reconstruction are different, but share some common features. In particular, in both tasks, the model is asked to “learn” the training data distribution and perform sampling (as opposed to regression for instance). In that sense, the exact details of the architecture, model and task are not particularly important, as long as the model can be thought of as a two component “memorization” module and “inference” module. Therefore, our predictions are quite universal, as long as the data is not uniformly “easy”, and so we believe our results should extend to other generative settings, for instance to diffusion models. 2) *Sensitivity to $\alpha,\beta$* - The question of sensitivity to the parameter choices is a bit unclear to us, since the conclusions (namely the inference scaling laws) are analytically given in terms of $\alpha,\beta$, so the exact dependence on the parameters can be characterized. Could the reviewer please clarify their meaning? If the question is of interpretation, then different choices of $\alpha$ correspond to a different small $k$ behavior, since as we explain in the main text, the average “difficulty” of the samples is $\frac{\alpha}{\alpha+\beta}$, and so larger $\alpha$ values would imply a slower improvement with $k$ for a smaller number of trials, while $\beta$ dictates the large $k$ behavior of the inference loss/pass@k, meaning how difficult it is to improve performance by increasing $k$ at a large number of inference attempts, where larger $\beta$ means greater scaling improvement. We hope that our replies are sufficient to raise the reviewer’s confidence in our work, and potentially accept the paper. We welcome any further questions and comments.
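To make the roles of $\alpha$ and $\beta$ concrete, here is a small self-contained sketch of one natural reading of the Beta-difficulty model discussed in this exchange: each sample has a per-trial failure probability $p \sim \mathrm{Beta}(\alpha, \beta)$, trials are independent, and coverage is $pass@k = 1 - \mathbb{E}[p^k]$. The function names and this exact parametrization are illustrative and may differ from the paper's.

```python
from math import exp, lgamma
import random

def pass_at_k_closed(alpha: float, beta: float, k: int) -> float:
    """pass@k = 1 - E[p^k] for per-trial failure prob p ~ Beta(alpha, beta).

    Uses E[p^k] = B(alpha + k, beta) / B(alpha, beta), evaluated via
    log-gamma for numerical stability.
    """
    log_ratio = (lgamma(alpha + k) + lgamma(alpha + beta)
                 - lgamma(alpha) - lgamma(alpha + beta + k))
    return 1.0 - exp(log_ratio)

def pass_at_k_mc(alpha: float, beta: float, k: int,
                 n_samples: int = 200_000, seed: int = 0) -> float:
    """Monte-Carlo check: draw a difficulty per sample; all k independent
    trials fail with probability p**k."""
    rng = random.Random(seed)
    fail_all = sum(rng.betavariate(alpha, beta) ** k for _ in range(n_samples))
    return 1.0 - fail_all / n_samples
```

Consistent with the interpretation above: the small-$k$ behavior is governed by the mean difficulty $\alpha/(\alpha+\beta)$ (so larger $\alpha$ means slower initial improvement), while for large $k$ the inference loss $\mathbb{E}[p^k]$ decays as a power law with exponent $\beta$, so larger $\beta$ gives greater scaling improvement.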
Summary: This paper studies the paradigm of inference scaling and tries to identify the functional form that can help explain and predict performance therein, i.e., building a scaling law with respect to inference budget (e.g., k in the pass@k metric). The authors consider one of the simplest models for pretraining scaling laws from the literature, proposed by Hutter (2021) (and originally dating back to several people in the 1990s, including Amari), that considers a hypothesis class wherein the data has been perfectly memorized by the network. Assuming this perfect memorization, the rate of loss reduction can be shown to yield a power law under certain assumptions on the data distribution. The authors essentially extend this model to inference scaling, finding really good fits to the empirical results. Claims And Evidence: The claims are well-backed with evidence: the empirics are thorough and consistency with theory is really nice. I really liked the VAE results---they were honestly the most exciting confirmation of the model and I'd have loved to see them emphasized more in the writeup. Methods And Evaluation Criteria: Both approaches and results make sense, and I do not have any complaints. Theoretical Claims: Not quite applicable, since the model is an extension with fairly reasonable and well-motivated assumptions on the fitting parameters. Experimental Designs Or Analyses: Yes, see above. Supplementary Material: Yes, whenever figures were referenced, I ensured to check them in the appendix. Relation To Broader Scientific Literature: To my knowledge, this paper offered the first theoretical model for inference scaling. A very loosely related paper that comes to mind is by Park et al. (https://arxiv.org/abs/2501.00070), which in fact shows that sample complexity of inference scaling for a belief update task (unlike the ones considered in this paper) is worse than a memorization-based model would suggest, but nevertheless highly predictable.
I'd actually be curious to hear the authors' thoughts on that paper. Essential References Not Discussed: Related work is fairly described in my opinion. Other Strengths And Weaknesses: See above. Other Comments Or Suggestions: Implementation: While the authors motivate their model of scaling with the memorization ansatz that derives from Hutter's work, it would have been better to see a more mechanistic argument for why we should expect such a model to transpire for a pretrained Transformer model. Is it possible the results are a consequence of specialized attention heads basically operating like perfect memorization modules? If not, then would we expect the scaling to be very different for a different architecture such that the proposed memorization model breaks down there? For ex., would inference scaling with an RNN not work? (I realize inference scaling with RNNs has not been shown, but I'm mostly asking for curiosity's sake here.) Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer kFgD, Thank you for carefully reading our manuscript; we are glad that you found our theoretical analysis sound and our empirical results well backed. We would like to address the various points you raised below. **Regarding Evidence** We agree that the VAE results are interesting, and are happy to see that they were appreciated. To be honest, having discussed with other researchers and due to the broader interests of the community, we preferred to highlight the functional matching to the LLM performance rather than the more controlled VAE setting. If you believe this is sufficiently interesting to the community, we will try to highlight these results a bit more in the revised manuscript. **Regarding Broader Scientific Literature** As far as we know, this is indeed the first theoretical model for inference scaling. From a brief reading of the Park et al. paper, we agree that there are interesting connections there, though their work is for in-context learning and ours was tested on inference without further context. It is not unreasonable to interpret the in-context update to the representation as somewhat equivalent to resampling from the memory of the pre-trained model during inference. Perhaps in the ICL setting our "error probabilities" are not fixed, but updated according to the context provided. We thank the reviewer for bringing this to our attention, and will add it to the related works. If the reviewer has more insights to share along these lines during the discussion period, possibly regarding something we've missed, we'd be happy to discuss it further. **Comments/Suggestions** This is an extremely interesting question, and we agree that developing a mechanistic understanding of the memorization module ansatz would be a valuable complement to our simple model. Our goal in this work was to introduce a first model that provides a clear explanation of the key inference observation.
Understanding how this final result emerges from training would be a natural extension. As a next step—currently a work in progress—we aim to explore the connection between pre-training scaling performance and inference scaling in solvable models, which we believe exist. This direction may align with the reviewer's intuition, as memorization must occur during training, and we may observe different inference scaling behaviors depending on whether the "memorization assumption" holds or breaks down. Moreover, the simple model presented here seems to give quite universal results, but it still might be that there are several "universality classes" into which different models might fall. Finally, while we have not yet considered RNNs, their study should, in principle, be feasible, and would very likely lead to different scaling behaviors, at least in some cases (if not considering SSMs perhaps). We appreciate this valuable suggestion and will certainly investigate it in future work. We hope that our replies have affirmed your confidence in our work, and potentially lead to accepting the paper. We welcome any further questions and comments.
Summary: The paper proposes to study scaling laws for inference in a restricted setup where the model can potentially memorize the training dataset. The paper also shows that the theoretical predictions match empirical results on mathematical reasoning tasks for LLMs. Claims And Evidence: The paper makes several assumptions about the setup (e.g., that the model can memorize all the samples up to the model capacity (line 101, column 2), that the model makes an incorrect prediction with some probability, or that a Beta distribution can be used to model the distribution of difficulty of data points). Using these assumptions, the paper derives scaling laws and the theoretical results are shown to match the empirical results. Methods And Evaluation Criteria: The paper largely focuses on theoretical results and while they could have used more complex datasets for their empirical results, it should be fine given the focus on theory. Theoretical Claims: I do have some questions about assumptions that the authors make (more of this in "Questions For Authors"). I could broadly follow the theoretical claims but I did not attempt to re-derive final expressions in different equations (e.g. equation 7, equation 9, equation 10). I should also say that I will rely quite heavily on the other reviewers to fully appreciate the theoretical contributions (as this is not my regular area of work) so the authors should focus on addressing their questions and concerns first. Experimental Designs Or Analyses: Given the focus on theoretical results, the experiments, while not exhaustive, seem sufficient to back up the main claims. Supplementary Material: yes - all Relation To Broader Scientific Literature: I am not much familiar with this literature but the paper seems to build on previous works, in terms of scaling law analysis and the choice of the memorization setup.
It also improves on the existing work by focusing more on the inference scaling laws and incorporating the number of inference steps in their analysis. Essential References Not Discussed: I am not much familiar with this literature but the paper seems to cite relevant related work. Other Strengths And Weaknesses: I found the paper (and some captions) to be a bit dense to understand. e.g. The caption for Figure 1 was quite dense. I found the experiments with Llama-3-8B to be well motivated and easy to follow. The paper makes both theoretical and empirical contributions and should be useful for the community. Other Comments Or Suggestions: 1. The authors should include some motivational real-world examples where the memorization assumption holds (in approximation) 2. In line 425 (second column), "Similar to ( Ringer..)" should be "Similar to Ringer.. " Questions For Authors: 1. In section 4, why is the Beta distribution a good choice? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer 4E5M, Thank you for your positive reading of our manuscript. Below, we address your comments/questions: **Weakness:** *The caption for Figure 1 was quite dense* – In the revised version, we will shorten the caption, and move some of its content to the main text, namely L131 – 133 could be moved. Do you believe this will make the caption easier to understand? **Comments/Questions:** 1) *The authors should include some motivational real-world examples where the memorization assumption holds (in approximation)* - As per your suggestion, we will include an appendix which contains a very simple neural network classifier which can memorize (approximately) its training data, combined with an inference model which is allowed to make mistakes based on the Beta distribution. This way we separate the memorizing assumption from the inference assumption. 2) Thank you for the correction, we will fix the mistake. **Questions:** 1) Regarding the Beta distribution - The Beta distribution is a particular choice made in this paper, but it is certainly not unique, as we point out in the main text: *One way to model…*. The reason for this choice is the fact that the inference behavior differs for small $k$ and large $k$, and so at least a two-parameter distribution is required. For instance, using the classical Zipf law type distribution with a single decay parameter $\alpha$ would only capture one of these limits. Other two parameter distributions would also be acceptable, but the interpretation will remain the same: the model perceives some inference tasks as “difficult” and others as “easier” depending on the two parameters of the distribution. We hope that our replies are sufficient to raise the reviewer’s confidence in our work, and potentially accept the paper. We welcome any further questions and comments.
Riemannian Diffusion Adaptation for Distributed Optimization on Manifolds
Accept (poster)
Summary: The paper concerns online distributed optimization for data on Riemannian manifolds. The authors propose an algorithm for distributed optimization between a number of agents with combination steps in the tangent spaces of the current values at the agents. The paper contains a theoretical analysis of the algorithm, and experimental validation on synthetic and real data. ## update after rebuttal I have maintained my initial score as it still adequately reflects my evaluation of the paper. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: I did not check the proofs for correctness. My overall impression from reading the main paper is that the exposition is correct Experimental Designs Or Analyses: I did not find any issues Supplementary Material: no Relation To Broader Scientific Literature: while I am not specifically aware of the literature on e.g. decentralized optimization, I believe the literature is adequately surveyed Essential References Not Discussed: no Other Strengths And Weaknesses: Strengths: - well-written paper. Clear explanation of the chosen methodology - well-chosen methodology to solve the problem - thorough theoretical analysis of the proposed algorithm Weaknesses: - I believe the algorithm is a fairly straight-forward generalization of its Euclidean counterpart. This is not necessarily a bad thing, it just mean that the algorithm itself does contribute significant new ideas. This is partly balanced by the theoretical analysis that has to account for the geometry Other Comments Or Suggestions: no other comment Questions For Authors: no questions Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for reviewing our work and pointing out the strengths and weaknesses. In the following, we provide replies to weaknesses: We would like to emphasize that the proposed work is not a trivial generalization, even though there is an Euclidean counterpart to the Riemannian diffusion adaptation algorithm. For example, when combining the local estimates lying on manifolds, it is no longer possible to use a simple linear combination of the local estimates as in the Euclidean counterpart, as we do not assume a vector space structure in the manifold. We tackled this issue by proposing a one-step Riemannian gradient descent over a network agreement loss function to achieve information exchange during the learning and adaptation process. As you also mentioned, the theoretical analysis (both in terms of the network agreement and non-asymptotic convergence) involves fundamentally new ideas since our analysis is fully geometric. For example, in the network agreement analysis, we cannot use adjacency matrix decomposition as in the Euclidean counterpart due to the combination step in our case being *non-linear*. Thus, the analysis is rendered significantly more difficult since the network centroid cannot be computed using a simple/linear expression as in the Euclidean case. Therefore, we propose a novel framework to study the network agreement through the evolution of the penalty term $P(\boldsymbol{\phi}_t)$. In the non-asymptotic convergence analysis, curvature-related terms also make traditional techniques used in Euclidean spaces unfeasible, requiring careful design of a Lyapunov function (please see more details in *[the response to reviewer bCcG](https://openreview.net/forum?id=5tyvHfhRFZ&noteId=IqmTIhR1dX)*).
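As an illustration of the adapt-then-combine structure described in this rebuttal, here is a toy instantiation on the unit sphere (a leading-eigenvector/PCA-type problem). All names here (`diffusion_step`, `mu`, `nu`) are ours, not the paper's, and projecting the difference of neighboring estimates onto the tangent space is only a first-order stand-in for the paper's combination step, which is a Riemannian gradient step on a network-agreement loss.

```python
import numpy as np

def proj_tangent(x, v):
    """Orthogonal projection of an ambient vector v onto the tangent
    space of the unit sphere at x."""
    return v - np.dot(x, v) * x

def retract(x, v):
    """Metric-projection retraction on the sphere: move, then renormalize."""
    y = x + v
    return y / np.linalg.norm(y)

def diffusion_step(X, grads, W, mu, nu):
    """One adapt-then-combine step for agents constrained to the unit sphere.

    X: (n_agents, d) current estimates; grads: ambient Euclidean gradients
    of the local risks; W: combination weights; mu, nu: step sizes.
    """
    n = X.shape[0]
    # Adaptation: each agent takes a local Riemannian gradient step.
    Phi = np.array([retract(X[i], -mu * proj_tangent(X[i], grads[i]))
                    for i in range(n)])
    # Combination: one gradient-type step on a network-agreement penalty,
    # with tangent-projected differences approximating the log map.
    X_new = np.empty_like(Phi)
    for i in range(n):
        v = sum(W[i, j] * proj_tangent(Phi[i], Phi[j] - Phi[i])
                for j in range(n))
        X_new[i] = retract(Phi[i], nu * v)
    return X_new
```

With local risks $f_i(x) = -x^\top A_i x$, each agent performs a Rayleigh-quotient ascent step and then averages with its neighbors in its own tangent space, so the estimates stay on the sphere and are pulled toward network agreement without ever forming a Euclidean average of manifold points, which, as the rebuttal notes, is not meaningful without a vector-space structure.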
Summary: This paper aims to solve the online decentralized optimization problem on general Riemannian manifolds for multiple agents. The proposed Riemannian diffusion adaptation method contains two stages: an adaptation step and a combination step. It theoretically proves that all agents will approximately converge to a network agreement with non-asymptotic convergence after sufficient iterations. The experiments on two typical manifolds for PCA and GMM show that the proposed method significantly outperforms the non-cooperative, DRSGD and ECGMM methods. Claims And Evidence: Yes. The authors provide detailed and complete proofs of the theorems. Methods And Evaluation Criteria: The proposed method is evaluated on synthetic data and real data in applications to distributed PCA and GMM inference. However, more complex situations and applications are not discussed. Theoretical Claims: Yes, I have checked most proofs of the lemmas and theorems, especially Lemma 5.11 and Theorem 5.15. Experimental Designs Or Analyses: As an intuitive demonstration, the examples of distributed PCA and GMM inference on synthetic and real data are great. But for wider application aspects, the experiments need to cover more complex situations in real-world benchmarks. Supplementary Material: I have reviewed the supplementary materials, especially the proofs of the lemmas and theorems. Relation To Broader Scientific Literature: The previous works on decentralized optimization are in Euclidean space. When extending to Riemannian manifolds, some previous works construct functions that map the points from the manifold to Euclidean space. But this paper proposes a more direct method, avoiding transformation between the manifold and Euclidean space, and it can be proved theoretically that the algorithm achieves non-asymptotic convergence. Essential References Not Discussed: No. Other Strengths And Weaknesses: Pros: 1. The proposed methods are concise and clear.
The two stages are intuitive: in the adaptation step, the idea is to compute a local solution for each agent; in the combination step, the idea is to reach agreement among all agents. 2. The whole idea is very simple, and the theoretical analysis is rich. 3. Compared to previous works, the proposed method avoids transforming points to Euclidean space and back to the manifold repeatedly. Cons: 1. The two examples and applications are all simple cases; it is hard for readers to accept that the proposed method will have wide application. 2. There are 4 assumptions in this paper; the paper lacks a discussion of whether these assumptions fit most common situations in real applications. 3. The computational complexity of the proposed method is not discussed; in the high-dimensional case, will the computational cost be very high? Other Comments Or Suggestions: No other comments. Questions For Authors: 1. In the experimental results, why do the Riemannian centralized methods outperform the proposed methods so much? What are the advantages of the proposed method compared to the Riemannian centralized method? 2. The authors said that the proposed method is a strategy over general manifolds. However, in the general manifold case, e.g., the manifold embedded in $\mathbb{R}^{m+n}$ defined by the implicit function $p=(u, v) \in \mathbb{R}^{m+n}$ where $v=f(u)$ with $f:\mathbb{R}^m\rightarrow \mathbb{R}^n$, how does the proposed Riemannian diffusion adaptation proceed? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for reading our work (especially for checking the proofs) and offering constructive suggestions. In the following, we provide clarifications and answers to your comments and questions. ### **Replies to weaknesses:** ***Conduct more experiments for wider application aspects:*** Thank you for this comment. We highlight that while PCA and GMM inference are key problems in machine learning, the proposed method is fairly general and applicable to a wide set of problems (the revised manuscript will be modified to better showcase this). Moreover, one of the main contributions of our paper is the theoretical analysis, and we find that the experimental validation in our paper is still in line with those performed in related work (see, e.g., Bonnabel, 2013; Zhang & Sra, 2016; Chen et al., 2021; Vlaski & Sayed, 2021; Li & Ma, 2023). While additional experiments with complex real-world benchmarks would definitely improve the paper, unfortunately, we were not able to perform such experiments in the comparatively short time available during the rebuttal period. ***Claim the fitness of the assumptions in real applications:*** Thank you for this insightful comment. These main assumptions made in the paper are indeed standard in Riemannian and decentralized optimization algorithms, particularly in what concerns their theoretical analysis, see e.g., (Bonnabel, 2013; Zhang et al., 2016; Tripuraneni et al., 2018; Afsari, 2011; Chen & Sayed, 2012; Sayed et al., 2013; Afsari et al., 2013). A necessary assumption for the derivation of the algorithm is the smoothness of the risk functions, since the proposed algorithm relies on gradients, while the remaining assumptions are used in the theoretical study. Moreover, an assumption that is less frequently satisfied is geodesic convexity.
Considering the practical examples studied in the paper (PCA and GMM inference), the cost function of PCA formulated on the Grassmannian manifold has recently been shown to be geodesically convex [R1]. On the other hand, for GMM inference, the log-likelihood has been shown to be geodesically convex for the case of a single Gaussian (Hosseini & Sra, 2015), but not necessarily when multiple Gaussians are considered. From the example of GMM inference, we find that the proposed algorithm can work even in situations where not all of these assumptions are satisfied. We will update the revised manuscript to discuss the applicability of the assumptions in our work to practical problems. [R1] Alimisis, F., and Vandereycken, B. Geodesic convexity of the symmetric eigenvalue problem and convergence of steepest descent. Journal of Optimization Theory and Applications, 203(1), 920-959, 2024. ***Discussion for the computational complexity:*** Thank you for this useful comment. While the computational complexity scales with the dimension of the manifold, the proposed approach is a first-order gradient-based method, and its complexity remains as low as in other first-order optimization approaches such as (Bonnabel, 2013; Zhang & Sra, 2016; Tripuraneni et al., 2018). For a detailed discussion on the computational complexity of the proposed algorithm, please refer to *[the response to Reviewer xKFw](https://openreview.net/forum?id=5tyvHfhRFZ&noteId=Lgg3fpLQRg)*. ### **Replies to questions:** ***Why does the Riemannian centralized method perform so much better? What are the advantages of the proposed method?*** The centralized solution achieved the best performance, as it can access all the data over the whole network at every iteration. The proposed algorithm is fully decentralized, where each agent uses only locally observed data to update its local estimate and exchanges information only with neighboring agents.
Although the proposed algorithm has lower performance compared to the centralized method, it can be computed in parallel on multiple agents. We will add more details in the revised manuscript to better explain the differences and advantages. ***How does the proposed method work on a manifold defined through an implicit function?*** The proposed algorithm is designed for general Riemannian manifolds, but it requires the computation of retractions and Riemannian gradients, as in other Riemannian optimization works (see, e.g., (Boumal, 2023; Zhang & Sra, 2016; Bonnabel, 2013), to name a few). When mentioning "general manifolds", we aimed to differentiate our approach from extrinsic works that focus on specific examples, such as Stiefel manifolds (Chen et al., 2021; Wang & Liu, 2022). For a manifold defined through an implicit function, such a Riemannian structure and the required operations would have to be derived to make those strategies applicable. We will clarify the "general manifolds" statement in the revised manuscript.
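As a concrete illustration of the operations that must be derived for an embedded manifold, the unit sphere admits a simple Riemannian gradient: the orthogonal projection of the Euclidean gradient onto the tangent space. This is a standard construction in Riemannian optimization texts (e.g., Boumal, 2023), sketched here as a toy example, not code from the paper:

```python
import numpy as np

def riemannian_grad_sphere(x, egrad):
    """Riemannian gradient on the unit sphere: project the Euclidean
    gradient onto the tangent space at x (the orthogonal complement of x)."""
    return egrad - np.dot(x, egrad) * x

x = np.array([1.0, 0.0, 0.0])          # a point on the sphere
egrad = np.array([1.0, 2.0, 3.0])      # a hypothetical Euclidean gradient
rgrad = riemannian_grad_sphere(x, egrad)
```

The resulting vector is orthogonal to `x`, i.e., it lies in the tangent space, which is exactly the property the algorithm's updates require.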
Summary: This paper proposes a decentralized optimization algorithm on manifolds that is termed Riemannian diffusion adaptation algorithm. The proposed algorithm follows two steps. First, in the adaptation step, each agent updates its local solution estimate on the manifold using Riemannian stochastic gradient descent (R-SGD). Second, in the combination step, agents share and combine their estimates in the tangent space. A theoretical analysis under a constant step size shows that the algorithm achieves network agreement with high probability and converges to a neighborhood of the optimal solution. The method is demonstrated on online decentralized PCA and GMM inference, with experiments on both synthetic and real-world data showing its effectiveness. Claims And Evidence: The claims are adequately supported by theoretical and experimental results. Methods And Evaluation Criteria: The methods and evaluation criteria are adequate. Theoretical Claims: I checked the derivations and proofs, and they seem correct. Experimental Designs Or Analyses: The experimental design and analyses are sound and valid. Supplementary Material: I checked the proofs in A and B and only skimmed through C. Relation To Broader Scientific Literature: The paper addresses an important and central problem. Essential References Not Discussed: The references are adequate. Other Strengths And Weaknesses: Strengths: - The addressed problem is important and central. - To the best of my knowledge, although simple and straightforward, the proposed algorithm is new - The theoretical analysis presents important properties of the algorithm - The experiments nicely demonstrate the benefits of the algorithm compared to baselines in two classical applications. Weaknesses: - The introduction could be improved (see below) - A discussion on the computational complexity of the algorithm is missing. - A discussion on the limitations is missing. 
For example, how well does the algorithm scale with the number of agents? How is the exp map (or retraction) computed on manifolds without a closed-form expression? Other Comments Or Suggestions: - Page 1, right column, lines 011-025: The discussion is vague and should be more concrete. Many terms are mentioned (e.g., embedding and Whitney embedding) without any introduction. To clarify and strengthen the motivation, this paragraph should be re-written. - Page 1, right column, description of contributions 1: the paragraph contains many terms whose meaning is not completely clear at this stage (adaptation strategy, fully intrinsic, general manifolds, a sequence of efficient adaptation and combination steps). - Page 1, right column, description of contributions 2: same as in contribution 1, the paragraph contains many unclear terms (network agreement, decreasing geodesic distance, curvature-dependent, non-asymptotic convergence, proper design of Lyapunov function). - Sec. 5 contains many existing results (up to Cor. 5.5) - consider separating the old results from the new results. - In Sec. 6.1 - consider putting the definition of MSD in a non-inline equation for better emphasis. Questions For Authors: - In Sec. 7 - only one network is considered. Why only one? How does the algorithm scale with the number of agents K? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for reading our work (especially in checking the proofs) and offering constructive suggestions. In the following, we provide clarifications and answers to your comments and questions. ### **Replies to weakness:** ***Improve the introduction and technical section:*** Thank you very much for the suggested improvements to the introduction and technical sections. We will provide sharper definitions on Page 1 and add subsections in Section 5 to clearly separate existing results from previous works and our technical results in the revised manuscript. ***Discussion for the computational complexity:*** Thank you for this insightful comment. The computational complexity of the proposed algorithm involves two contributing terms. The first is the cost of a local adaptation step at each agent $k$ (i.e., Riemannian SGD on $J_k$), which is denoted by $T_J$. The second is the cost of the combination step, which involves a gradient step over the loss function $P_k$ that scales linearly with the number $N_{{\rm neigh},k}$ of neighbors connected to node $k$ in the graph (that is, with the number of nonzero elements in the coefficients $c_{k\ell}$), which we represent as $N_{{\rm neigh},k}\cdot T_{P}$, where $T_P$ is the cost of computing the Riemannian logarithm operator. $N_{{\rm neigh},k}$ is also known as the *degree* of the vertex $k$ in the graph $\mathcal{G}$. Thus, for each agent $k$, we obtain a complexity of $T_{J} + N_{{\rm neigh},k}\cdot T_{P}$. Compared to a noncooperative setting, we have an overhead cost of $N_{{\rm neigh},k}\cdot T_{P}$, which depends both on $T_P$ (which varies with the manifold) and on the number of neighbors connected to node $k$ (which depends on the network topology). This allows us to understand how the complexity scales with the number of agents $K$. In the case where the number of neighbors of each node (i.e., their degree in the graph) is constant, the complexity does not increase with the number of agents.
On the other hand, in the worst-case scenario of a fully connected network (where each vertex has degree $K-1$, being connected to all other vertices), the complexity scales linearly with $K$, with a coefficient equal to $T_P$. We will include this discussion on the computational complexity of the algorithm in the revised manuscript. In addition, we will calculate the complexity values of $T_J$ and $T_P$ (in terms of the required number of operations) for the PCA and GMM problems discussed in Section 6 of the paper and include them in the revised manuscript. ***Discussion for the limitations:*** Thank you for this insightful suggestion. We summarize such a discussion in the following, and will include it in the revised manuscript: - Scaling with the number of agents: the discussion on the computational complexity (explained in more detail just above) shows how the complexity scales with the number of agents $K$. In particular, the computational complexity of the combination step of the algorithm scales according to the *degree* (number of neighbors) of the vertices of the graph. In the worst case of a fully connected network, this contribution scales linearly with the number of agents $K$. - Manifolds without closed-form expressions: manifolds without closed-form expressions for retractions, or for the Riemannian gradient, pose challenges to the implementation of the proposed algorithm, as such operations have to be approximated numerically in some way. However, we highlight that this limitation also holds for most existing Riemannian optimization algorithms, and is not specific to our work. - Theoretical analysis: one limitation of the theoretical analysis is that it relies on the use of the exponential mapping $\exp_x$, while in practice a retraction $R_x$ is used, which is more computationally efficient. For more details, please see *[the response to Reviewer qfKq](https://openreview.net/forum?id=5tyvHfhRFZ&noteId=wtPNBvN9LG)*.
### **Replies to questions:** ***Why is only one network considered?*** To illustrate the applicability to more networks, we randomly generate a different graph topology with uniformly distributed weights, and test all algorithms in the same setting as in Section 7 of the manuscript. The graph topology and experimental results can be seen in https://ibb.co/cKdYKdZ6, and remain similar to those obtained with the original network. We will include more experiments in the revised manuscript. ***How does the algorithm scale with the number of agents $K$?*** The computational complexity is related to $K$. The proposed algorithm is parallelizable, with an adaptation cost that is constant per agent and an overhead cost of the combination step. The latter depends on the number of neighbors connected to each node in the graph, and in the worst-case scenario, it can increase linearly with $K$. For more details, please see the discussion on the computational complexity in the previous response.
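The per-agent cost model stated in the responses above, $T_J + N_{{\rm neigh},k}\cdot T_P$, can be written out directly. The unit costs below are hypothetical placeholders used only to contrast the bounded-degree and fully connected scalings:

```python
def per_agent_cost(T_J, T_P, degree):
    """Per-iteration cost of one agent: adaptation step (T_J) plus one
    Riemannian logarithm (T_P) per neighbor in the combination step."""
    return T_J + degree * T_P

# Hypothetical unit costs, for illustration only.
T_J, T_P = 100, 10

# Bounded-degree network: the per-agent cost is independent of K.
cost_bounded = per_agent_cost(T_J, T_P, degree=4)

# Fully connected network of K agents: degree K - 1, so the
# combination overhead grows linearly with K.
K = 50
cost_full = per_agent_cost(T_J, T_P, degree=K - 1)
```

This mirrors the rebuttal's point: with constant degree the cost per agent is flat in $K$, while in a fully connected topology the combination overhead grows linearly with $K$.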
Summary: The paper studies online distributed optimization on manifolds, and proposes Riemannian diffusion adaptation in which each agent keeps running two steps until convergence: 1) execute R-SGD; 2) combine outputs of neighboring agents by running one step of RGD over the associated penalty function which characterizes the network agreement. The proposed algorithm is shown to converge to a consensus with high probability provided that a sufficiently small step-size is used. Similar results are established on the convergence of the objective function. Experiments are conducted on two instances, i.e., distributed PCA and distributed GMM inference, showing better performance of the proposed algorithm compared to baselines. Claims And Evidence: The paper seems to extend the Riemannian diffusion adaptation algorithm proposed in Wang et al., 2024b. The difference is that the minimization of the penalty function there is replaced by a gradient descent step here. But it remains unknown why this replacement performs better. Although the minimization of the penalty is time-consuming, it might require far fewer iterations than the one-gradient-step version here. The main contribution seems to be the theoretical analysis of this algorithm. It could be compared with the results in the Euclidean setting. Also, it was mentioned that a Lyapunov function is designed. But I didn't see it in the main text. The step size seems critical to the tradeoff between convergence and accuracy. It would be better if experimental results on it could be reported. Other minor issues: 1) denominators in Lines 275 and 282 are 0 when s=s0+1 Methods And Evaluation Criteria: see above Theoretical Claims: In the proof of Lemma 5.1, it is said that the inequality in Eq. (33) holds by the Cauchy-Schwarz inequality. This seems wrong. Experimental Designs Or Analyses: The experimental design follows Wang et al., 2024.
Supplementary Material: I only checked the proof of Lemma 5.1 Relation To Broader Scientific Literature: This paper provides a theoretical analysis of a modified Riemannian diffusion adaptation algorithm. Essential References Not Discussed: No Other Strengths And Weaknesses: see above Other Comments Or Suggestions: see above Questions For Authors: see above Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for reading our work and offering constructive suggestions. We provide clarifications to your comments below. ***Lack of evidence that the replacement performs better:*** Thank you for this comment. We argue that the algorithm in (Wang et al., 2024b) is inefficient due to the inner-loop optimization when minimizing the penalty $P(\boldsymbol{\phi}_t)$. We support this claim with a numerical evaluation reported in *[the response to Reviewer qfKq](https://openreview.net/forum?id=5tyvHfhRFZ&noteId=wtPNBvN9LG)*. From this result, we see that the penalty minimization in (Wang et al., 2024b) does not require fewer iterations than our proposed approach; the two actually require nearly identical numbers of iterations to achieve convergence. We also present the result at https://ibb.co/nsDtry65 for your convenience. ***Comparison of the theoretical results with the Euclidean counterpart:*** Thank you for this insightful comment. Compared to the Euclidean counterpart (Chen & Sayed, 2012; Sayed et al., 2013; Vlaski & Sayed, 2021), one essential difference of our results is the impact of the manifold curvature $\kappa$ (captured in the parameter $\zeta$). For example, our convergence rates can be slower for some highly curved manifolds with $\kappa<0$. Moreover, for network agreement, our results focus on the evolution of the penalty term $P(\boldsymbol{\phi}_t)$, while the work in (Vlaski & Sayed, 2021) can directly study the evolution of the network centroid due to the problem being linear in the Euclidean case (the reason can be found in *[the response to Reviewer tsm9](https://openreview.net/forum?id=5tyvHfhRFZ&noteId=FQqPRWKwJt)*). For convergence, our results focus on the non-asymptotic evolution of the cost function, while the Euclidean counterpart (Chen & Sayed, 2012; Sayed et al., 2013) only studies the bound on the MSD performance at steady state. Also, their results benefit from the linear structure of Euclidean space.
We will add more detailed comparisons in the revised manuscript. ***Highlight the design of the Lyapunov function:*** Thank you for pointing out this vague definition. The Lyapunov function in our case is defined by $\Delta_s'=\mathbb{E}[J(\boldsymbol{w}_t')-J(\boldsymbol{w}^*)]$. The design of the Lyapunov function is special in this context due to the manifold curvature. While in the Euclidean case one can use a Lyapunov function $\Delta_s=\mathbb{E}[J(\boldsymbol{w}_t)-J(\boldsymbol{w}^*)]$, in the Riemannian case when "telescoping" the decrease in $\Delta_s$ in the analysis, the curvature-related term $\zeta$ prevents the cancellation of intermediate terms and makes this approach infeasible. Thus, we use a specially designed "curvature-aware" Lyapunov function $\Delta_s'=\mathbb{E}[J(\boldsymbol{w}_t')-J(\boldsymbol{w}^*)]$ inspired by (Zhang & Sra, 2016), which is a function of the "streaming average" of the iterates denoted by $\boldsymbol{w}_t'$, as defined in (29). The idea consists of averaging the iterates carefully using the curvature-related parameter $\zeta$ to obtain the desired cancellation of terms in (84) when telescoping the decrease of $\Delta_s'$ in the convergence analysis (please see details in Appendix B.2 of the manuscript). We will revise the manuscript to clarify the definition of this Lyapunov function and the reason behind its choice. ***Report the experimental results on the step size choices:*** Thank you for this suggestion. For our algorithm, the step sizes are indeed critical to the tradeoff between convergence speed and steady-state performance. We provide an illustrative experimental result at https://ibb.co/Q7L00mzj, and will report more results in the revised manuscript. ***Fix the typo of zero denominators:*** Thanks for pointing out this typo; the denominator should be $s-s_0+1$ as in Appendix B.2 of the manuscript. ***Fix mistakes in the proof of Lemma 5.1:*** Thanks a lot for pointing out this mistake.
The corrected proof uses Jensen's inequality and includes the missing condition that the adjacency matrix $C$ of the graph is left-stochastic, that is, $c_{\ell k}\geq 0$ and $\sum_{\ell=1}^K c_{\ell k} = 1$ for each agent $k$, in the assumption of "Regularization on graph". This condition is fairly standard (Chen & Sayed, 2012; Sayed et al., 2013; Vlaski & Sayed, 2021) and can be assumed without loss of generality. The corrected proof is reproduced below, and these modifications do not influence any other proofs of the manuscript, whose correctness has also been checked by us and other reviewers. _Proof_: From the definition of $\nabla P(\phi_t)$ and $P(\phi_t)$, we have $$ \lVert\nabla P(\phi_t)\rVert^2 = \sum_{k=1}^K\left\lVert- \sum_{\ell=1}^K c_{\ell k} \exp_{\phi_{k,t}}^{-1}(\phi_{\ell,t})\right\rVert^2 \leq \sum_{k=1}^K\sum_{\ell=1}^K c_{\ell k} \left\lVert\exp_{\phi_{k,t}}^{-1}(\phi_{\ell,t})\right\rVert^2 = 2P(\phi_t), $$ where the inequality follows from Jensen's inequality and the assumption that $C$ is left-stochastic.
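The corrected bound can be sanity-checked numerically in the Euclidean special case, where $\exp_x^{-1}(y) = y - x$. The random left-stochastic matrix below is an illustrative assumption, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
K, d = 6, 4
phi = rng.standard_normal((K, d))

# Random left-stochastic matrix: nonnegative entries, each column sums
# to 1, i.e. sum_l C[l, k] = 1 for every agent k.
C = rng.random((K, K))
C = C / C.sum(axis=0, keepdims=True)

# Euclidean special case of the Riemannian log: exp_x^{-1}(y) = y - x.
grad_norm_sq = sum(
    np.linalg.norm(sum(C[l, k] * (phi[l] - phi[k]) for l in range(K))) ** 2
    for k in range(K)
)
two_P = sum(
    C[l, k] * np.linalg.norm(phi[l] - phi[k]) ** 2
    for k in range(K) for l in range(K)
)
```

Because each column of $C$ is a set of convex-combination weights, Jensen's inequality applied to the convex function $\lVert\cdot\rVert^2$ gives $\lVert\nabla P\rVert^2 \leq 2P$ for any configuration of the points.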
Summary: This paper presents a novel Riemannian generalization of a diffusion adaptation strategy for distributed optimization. The distributed optimization aims at finding an optimal solution with consensus among different agents. The proposed algorithm utilizes the Riemannian exponential map on manifolds and obtains non-asymptotic convergence under appropriate assumptions. In particular, a network agreement among the agents is guaranteed after sufficient iterations, which minimizes the (locally) convex risk function. The experimental results demonstrate the convergence when the Riemannian exponential map is replaced with an appropriate retraction map. Claims And Evidence: The claims made in this submission are accurately stated and supported by convincing evidence and discussions. Except for the following statement ``A work extending the diffusion strategy to manifolds was introduced in (Wang et al., 2024b), but the algorithm is inefficient due to inner-loop optimization ...'', which is not supported by any experimental evidence. >> REVIEW UPDATE: Authors have responded to this concern and included supporting experimental result. Methods And Evaluation Criteria: The proposed methods and the evaluation criteria make sense for the problems of interest in this paper. Theoretical Claims: Yes, I checked and confirmed the correctness of the proofs. In particular, the proofs of Lemma 5.11, Theorem 5.12 and Theorem 5.15 are carefully examined. Experimental Designs Or Analyses: I checked the experimental designs and analyses. I find them organized well overall but I have some concerns that are not clearly addressed in: ``For computational simplicity, we replace the exponential maps in the updates (3) and (4) with approximate retractions.'' In addition to the concerns on the approximate retractions, I also believe that the potential of the proposed algorithm is not fully demonstrated in the analyses.
While the Riemannian centralized method obtains superior performance in terms of elapsed compute time, one of the most important advantages of decentralized optimization is the parallel computation over the agents. Accumulating the elapsed compute time blurs the parallel computing advantages of the proposed algorithm. Please refer to the ``Questions For Authors'' section, which explains my concerns. >> REVIEW UPDATE: Authors have clarified the concerns and confusions addressed above. Supplementary Material: Yes, I have reviewed the material on the proofs and the detailed experiment setup in the appendix. Relation To Broader Scientific Literature: This paper proposes the first Riemannian diffusion adaptation strategy with guaranteed convergence for general manifolds. The proposed algorithm outperforms the existing Riemannian diffusion adaptation strategy [CGHS'21] specifically designed for the PCA problem and the generalized extrinsic consensus strategy from [NOP'10,LZZHZL'17] for GMM. This paper also claims bad performance of the Riemannian diffusion adaptation strategy introduced in [WBR'24b], but the claim is not supported by numerical evidence. Essential References Not Discussed: This paper focuses on the diffusion adaptation strategy that utilizes the full gradient information, and there is also a stochastic diffusion adaptation strategy proposed in Nonconvex Federated Learning on Compact Smooth Submanifolds With Heterogeneous Data by Jiaojiao Zhang, Jiang Hu, Anthony Man-Cho So, Mikael Johansson, published in Proceedings of the AAAI Conference on Artificial Intelligence. It is worth mentioning the recent developments in Riemannian federated learning as a counterpart of decentralized algorithms for distributed optimization on manifolds. In particular, there are recent arXiv preprints: Riemannian Federated Learning via Averaging Gradient Stream by Zhenwei Huang, Wen Huang, Pratik Jawanpuria, Bamdev Mishra.
Federated Learning on Riemannian Manifolds with Differential Privacy by Zhenwei Huang, Wen Huang, Pratik Jawanpuria, Bamdev Mishra. Other Strengths And Weaknesses: Overall, the algorithmic design and theoretical analysis of the novel Riemannian diffusion adaptation strategy proposed in this paper is significant. In particular, the guaranteed convergence result for the Riemannian diffusion adaptation strategy as well as the techniques used to prove it are essential for this rising topic of distributed optimization. Other Comments Or Suggestions: I do not have comments other than those that have been addressed in other sections. Questions For Authors: The following questions/concerns stand out in my opinion. 1. The algorithmic design and theoretical analysis of the Riemannian diffusion adaptation strategy proposed in this paper are entirely based on the Riemannian exponential mapping. Why is this mapping replaced by the approximate retraction mapping in the numerical experiment? Does the theoretical analysis still apply to the approximated version of the diffusion strategy? 1. A much more important follow-on question: please be specific whether the approximate retraction mapping is employed in all algorithms that are being tested in the experiments. This is important because the experimental results are reported in terms of the elapsed compute time, which is significantly affected by the choice of update computation on manifolds. 1. What is the performance of the proposed method and other decentralized algorithms if the computations over agents are performed in parallel? As stated above, the algorithmic design and theoretical analysis of the novel Riemannian diffusion adaptation strategy proposed in this paper is significant and my opinion on this submission leans towards accept.
While it is a common practice to relax the Riemannian exponential mapping with a retraction mapping, this submission did not properly address the implications of the approximate retraction mapping used in the numerical experiment, which hurts the soundness of the numerical experiments as raised in the first two questions. Since the convergence results are actually obtained for the ``weaker'' retraction mapping, this paper is certainly recommended for ICML in my opinion if the first two questions are clearly addressed. The third question is more of a favourable question that explores more of the potential of the proposed algorithm. It would not damage my evaluation of this paper if it is not responded to. >> REVIEW UPDATE: Authors have clarified the concerns and confusions addressed above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for reading our work (especially in checking the proofs) and offering constructive suggestions. In the following, we provide clarifications and answers to your comments and questions. ### **Replies to Weakness:** ***Support the claim that the method in (Wang et al., 2024b) is not efficient:*** Thank you for this insightful comment. To support this claim, we compare the performance and runtime between the work in (Wang et al., 2024b) (denoted as "Inefficient Riemannian diffusion") and the proposed algorithm in the same setting as in Section 7.1 of the manuscript. We examine these two algorithms and produce the results as in https://ibb.co/nsDtry65. From these results, we can claim that while the performance of these two algorithms is nearly identical, the proposed algorithm achieves a significantly reduced runtime. We will add this supporting experiment to the revised manuscript. ***Discuss more Riemannian federated learning works:*** Thank you for mentioning related work on Riemannian federated learning, which is an important rising topic and a counterpart of our research goal. We will add the suggested references to the revised manuscript. ### **Replies to Questions:** ***Why is the exponential mapping replaced by a retraction in the experiment? Does the analysis still apply?*** We suggest replacing the exponential map with a retraction for computational reasons, as a retraction can be more efficient than the exponential map for certain manifolds (Boumal, 2023). While the exponential map is convenient for theoretical analysis, the retractions often lead to more practical and efficient computations. We will further clarify the motivation for this choice in the revised manuscript. Our theoretical analysis, like many works in Riemannian optimization, e.g., (Zhang & Sra, 2016), is based on the exponential map. 
A key result in (Bonnabel, 2013) states that $d(R_x(\mu\cdot v),\exp_x(\mu\cdot v))=O(\mu^2)$, meaning that for small $\mu$, a retraction closely approximates the exponential map. The main approach to proving convergence with retractions involves showing that the iterates of the algorithm remain close to those of an equivalent version using the exponential map, which holds as $\mu \to 0$ (Bonnabel, 2013). This argument typically relies on diminishing step sizes, whereas our analysis is designed for constant step sizes, which are crucial for continuous adaptation and learning. Some works also employ the _pullback_ operator $f\circ R_x$, i.e., the composition of the cost function $f$ and a retraction, to establish convergence. However, these approaches require assumptions that may be less natural, such as the convexity and smoothness of the pullback operator; see Chapter 4 of (Boumal, 2023). Thus, we believe that extending the proposed theoretical analysis based on a retraction is an exciting, though non-trivial, research direction. We will discuss this limitation of the theoretical analysis in the revised manuscript. ***Whether the retraction is employed in all algorithms?*** All the algorithms use the same retraction for a fair comparison. The experimental results are reported in terms of _time_, representing the time index of receiving streaming data $\boldsymbol{x}_t$, which can also be regarded as the iteration index of the stochastic algorithms. To avoid possible ambiguity, we will replace "time" with "iteration" and update all the related figures in the revised manuscript. ***What is the performance in parallel computing?*** The proposed method, like most decentralized algorithms, can benefit from parallelization. Given the response to the last question, all reported performance results (replacing "time" with "iteration") can be regarded as computations performed across agents in a parallel computing setting.
This can also be seen when we analyze how the computational complexity scales with the number of agents $K$ in the network, keeping in mind that all operations (i.e., both the adaptation and combination steps) are fully parallelizable over the agents. For a detailed discussion on the computational complexity of the proposed algorithm, please refer to *[the response to Reviewer xKFw](https://openreview.net/forum?id=5tyvHfhRFZ&noteId=Lgg3fpLQRg)*.
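The approximation property $d(R_x(\mu v), \exp_x(\mu v)) = O(\mu^2)$ cited from (Bonnabel, 2013) can be checked numerically on the unit sphere with the projection retraction. This is an assumed example retraction, not the one used in the paper; for this particular choice the gap in fact shrinks even faster than quadratically, which is still consistent with the bound:

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere."""
    t = np.linalg.norm(v)
    return x if t < 1e-15 else np.cos(t) * x + np.sin(t) * (v / t)

def sphere_retr(x, v):
    """Projection retraction: a cheap surrogate for the exponential map."""
    y = x + v
    return y / np.linalg.norm(y)

def geodesic_dist(x, y):
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))

x = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])   # unit tangent vector at x
gaps = [geodesic_dist(sphere_retr(x, mu * v), sphere_exp(x, mu * v))
        for mu in (0.4, 0.2, 0.1)]
```

Halving $\mu$ shrinks the retraction/exponential gap at least by a factor of four, which is the quadratic behavior the cited bound describes.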
Learn to Vaccinate: Combining Structure Learning and Effective Vaccination for Epidemic and Outbreak Control
Accept (poster)
Summary: The paper studies an important problem of picking nodes to vaccinate in a network, assuming an SIS model of epidemic spread. The authors assume the network is not known, and needs to be learned. This is an interesting extension, since most prior work assumes the network is known. The authors present an algorithm for optimal vaccination set when the network has bounded treewidth. In their experiments, they use another algorithm which greedily reduces the spectral radius the most in each step. The authors show their algorithms have slightly better performance compared to many other baselines. Claims And Evidence: Seem ok Methods And Evaluation Criteria: While it is ok to assume a meta-stable state, as in (Van De Bovenkamp & Van Mieghem, 2014), I am not sure it justifies the stationary distribution of the form stated in section 3.2, from which the authors are making assumption 3.1. This needs more discussion, and the response doesn't seem adequate. The graphs learned can have large treewidth, and the greedy algorithm is needed to ensure poly time Theoretical Claims: Seem ok, but haven't verified all the proofs in the appendix Experimental Designs Or Analyses: Seem ok. Prior work on network construction using SIR model gives bounds on the sample complexity. The authors mention Theorem 3.1 bounds the sample complexity. That is not clear, and it is not clear how this is done in the experiments. Supplementary Material: Checked the experimental results Relation To Broader Scientific Literature: Prior work on vaccination strategies has assumed the network is known. So the setup here is interesting, though it is not clear how practical this is. It is assumed that all infections are detected, but edges are not known. In practice (as during COVID), infection states seem just as sensitive (or maybe more). 
So a setting that combines partial observations of edges and infection states may be more realistic. Essential References Not Discussed: Some others on network inference, e.g., (Abrahao et al., Trace Complexity of Network Inference, 2013), and for characterization of the SIS model, e.g., (Ganesh et al., INFOCOM 2005) Other Strengths And Weaknesses: The setup is interesting, since networks are not fully known. The results are promising. However, it seems limiting that the results are shown for very specific parameters, so it is not clear how this works in the broader parameter regime. The DP algorithm only works for treewidth-bounded networks. What is the bound on the treewidth of the inferred networks? The authors would need another step for finding the treewidth, which can only be approximated. How data is used in the network inference step is not very clear, unlike the other network inference problems. Do you need to keep observing the infection states for a long time? Finally, as mentioned above, assuming infection states are fully observed but not the edges doesn't seem any more realistic or practical, and needs better motivation. Other Comments Or Suggestions: Please see above Questions For Authors: Please address the weaknesses mentioned above: -- results are shown for very specific parameters, performance in other regimes -- treewidth of inferred network. Can it be large? If so, how will the DP algorithm be used? -- how long do you need to observe, sample complexity -- model of observation of infection states Code Of Conduct: Affirmed. Overall Recommendation: 2
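The greedy heuristic mentioned in the summary, vaccinating at each step the node whose removal most reduces the spectral radius, can be sketched as follows. This is a reconstruction from the review's description, not the authors' code, and the star-graph example is an assumed test case:

```python
import numpy as np

def greedy_vaccinate(A, budget):
    """Greedily pick `budget` nodes whose removal most reduces the
    spectral radius of the (symmetric) adjacency matrix A, one per step."""
    A = A.astype(float).copy()
    alive = set(range(A.shape[0]))
    chosen = []
    for _ in range(budget):
        best, best_rho = None, np.inf
        for v in alive:
            B = A.copy()
            B[v, :] = 0.0       # vaccinating v removes all its edges
            B[:, v] = 0.0
            rho = max(abs(np.linalg.eigvalsh(B)))
            if rho < best_rho:
                best, best_rho = v, rho
        chosen.append(best)
        A[best, :] = 0.0
        A[:, best] = 0.0
        alive.remove(best)
    return chosen

# Star graph on 5 nodes: the hub (node 0) is the obvious first pick,
# since removing it drops the spectral radius from 2 to 0.
A = np.zeros((5, 5))
for i in range(1, 5):
    A[0, i] = A[i, 0] = 1.0
```

Each step costs one eigendecomposition per candidate node, so the whole procedure is polynomial in $n$, matching the complexity claim in the rebuttal below; in practice one would typically use a sparse power iteration instead of a dense eigensolver.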
Rebuttal 1: Rebuttal: We thank the reviewer for their time and thoughtful comments. We address each of the points raised below and will clarify them in the revised manuscript. ### Stationarity We refer the reviewer to our response to Q6 of Reviewer Gdj4 for a more detailed response. In short, either the process dies out quickly, in which case vaccination is unnecessary, or it enters a meta-stable regime where it behaves like a Markov chain with a stationary distribution, justifying Assumption 3.1. ### Comparison with the SIR model and sample complexity We thank the reviewer for this observation. Prior work on SIR-based structure learning relies on observing multiple independent cascades, such as Gomez-Rodriguez et al. 2012, since in SIR models each node can be infected at most once. As a result, interactions between nodes are limited within each cascade, and restarting the process is necessary to collect sufficient signal. In contrast, the SIS model permits reinfection, enabling a single persistent cascade to generate rich temporal correlations over time. Our learning algorithm leverages this property and thus does not require multiple independent cascades. Consequently, our sample complexity guarantees given in Theorem 3.1 are expressed in terms of the number of times specific infection patterns (e.g., $I(Y_j = 0, Y_i = 1, Y_S = y_S)$) appear in the data, rather than the number of cascades. Empirically, as shown in Figure 3 (Appendix A.2), SISLearn achieves an F1 score of 0.80 with just 400 rounds of observation, demonstrating strong learning performance. ### Observation of edges vs infection states We agree that combining partial edge and infection information is a realistic and important direction. We note that the setting of unknown edges but observed infection states is standard in prior work on SIR-based structure learning, like Gomez-Rodriguez et al., 2012 or Netrapalli & Sanghavi, 2012, which motivates exploring the analogous setup in the SIS setting. 
That said, our approach is highly flexible. Since SISLearn learns the neighborhood of each vertex independently, it can easily incorporate partial edge knowledge. Similarly, it can tolerate missing infection data: as long as some infection history is observed for every vertex and pairs of consecutive observations (e.g., at times $t$ and $t{+}1$) are available, the algorithm remains effective, though naturally requiring more samples. ### Specific parameters We emphasize that our parameter choices were guided by the goal of evaluating non-trivial regimes of the SIS model—where vaccinations are both necessary and impactful. If the infection-to-recovery ratio is too low, the disease naturally dies out without intervention, making vaccination redundant. If it is too high, the process becomes effectively unstoppable without unrealistic levels of vaccination. The intermediate regime is where extinction dynamics are sensitive to targeted interventions—precisely the setting where intelligent strategies matter. To demonstrate robustness beyond this regime, we conducted additional experiments on both low (Figure 5) and high (Figure 6) infection-to-recovery ratios; results are provided [here](https://drive.proton.me/urls/5N1PC0M6ZW#oCweKFODmBNn). As can be observed, our combined learning-and-vaccination approach still outperforms the other baselines. If the reviewer has a specific parameter regime in mind, we would be glad to run additional experiments. ### Treewidth of inferred networks The China flu graphs (Sec. 5) and USA flu graphs (App. A) have treewidths of 8 and 58, respectively, computed exactly using the state-of-the-art treewidth solver from the PACE 2017 competition (Tamaki, 2019). For graphs with large treewidth (tw > 20), our proposed Greedy algorithm is preferable due to its polynomial-in-$n$ complexity, making it significantly more efficient. 
Please also see our answer to the computational feasibility comment of Reviewer WHtL, where we discuss new experiments (new Figures 1 & 2). ### Usage of data & observation horizon Unlike SIR-based inference methods that rely on multiple independent cascades, SISLearn uses infection data from a single, ongoing epidemic, needing only the infection states of the vertices over time—even this can be relaxed, as discussed above. While our theoretical guarantees require that the process has reached meta-stability, we show in Appendix A.2 that SISLearn achieves an F1 score of 0.80 with just 400 rounds of data, without needing to wait for meta-stability to be reached. ### References Gomez-Rodriguez M, Leskovec J, Krause A. Inferring Networks of Diffusion and Influence. *ACM Trans Knowl Discov Data*. 2012. Netrapalli P, Sanghavi S. Learning the graph of epidemic cascades. *SIGMETRICS Perform Eval Rev*. 2012. Tamaki H. Positive-instance driven dynamic programming for treewidth. *J Comb Optim*. 2019. --- Rebuttal Comment 1.1: Comment: I am not very convinced about the response to the computational feasibility. You should acknowledge in the paper that the treewidth can be very large, so you really need the greedy algorithm also, in order to get polytime. For the stationarity part, it seems confusing because the paper starts with a discrete time process description, and then they make assumptions for this. That should be clarified --- Reply to Comment 1.1.1: Comment: ## Computational feasibility We thank the reviewer for their follow-up. We fully agree that the treewidth can be large in practice, and we will explicitly state in the revised manuscript that for graphs with $\text{tw}>20$, we recommend using the greedy algorithm. Our greedy method (Algorithm 4) is fast, scalable, and performs well on dense graphs with thousands of nodes, as demonstrated in new Figure 2. 
However, we believe it is important to emphasize two additional key insights regarding our DP algorithm, which further underscore the significance of our work: ### First polynomial-time algorithm for SRM on bounded-treewidth graphs: Although the SRM problem is known to be NP-hard in general (Van Mieghem et al., 2011), we have shown—for the first time—that SRM can be solved optimally in polynomial-in-$n$ time on graphs with a bounded treewidth (lines 309–320). Specifically, if the treewidth is upper-bounded by **any** constant (independent of $n$), our DP algorithm runs in polynomial time in $n$. To the best of our knowledge, this was so far an open question. The solution to this question is a novel theoretical contribution of this paper, independent of the practical runtime of the DP algorithm. ### Significant practical speedup for exact SRM solutions: Furthermore, even on graphs without constant treewidth bounds—where exponential runtime is inevitable for exact solutions—our DP algorithm is **five** orders of magnitude faster than the previously known approach for exactly solving SRM, as demonstrated in Figure 7 of our newly conducted simulations. Note that this practical runtime advantage **includes** the full end-to-end process: computing the treewidth, decomposing the graph into a nice tree decomposition, and running our DP algorithm. To summarize, here is how we envision our methods being applied in practice to solve the Vaccinating an Unknown Graph (VUG) problem: 1. Observations are collected and the underlying graph is inferred using SISLearn. 2. A fast, polynomial-time algorithm (such as Theorem 2 of Korhonen & Lokshtanov, 2023) is used to estimate an upper bound on the graph’s treewidth. 3. If the estimated treewidth bound is below $20$, our DP algorithm provides an optimal solution efficiently. Otherwise, our scalable greedy heuristic serves as an effective alternative. 
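To make the greedy alternative in step 3 concrete, here is a minimal sketch of greedy spectral-radius-driven vaccination in numpy. This is an illustrative reimplementation, not the paper's Algorithm 4: vaccination is modeled as full node removal (zeroing the node's row and column), and the toy star graph is our own example.

```python
import numpy as np

def spectral_radius(adj):
    """Largest-magnitude eigenvalue of the adjacency matrix."""
    return max(abs(np.linalg.eigvals(adj)))

def greedy_vaccinate(adj, budget):
    """Greedily pick `budget` vertices whose removal (zeroing the
    corresponding row and column) most reduces the spectral radius."""
    adj = adj.astype(float).copy()
    chosen, remaining = [], set(range(adj.shape[0]))
    for _ in range(budget):
        best_v, best_rho = None, np.inf
        for v in remaining:
            trial = adj.copy()
            trial[v, :] = 0.0
            trial[:, v] = 0.0
            rho = spectral_radius(trial)
            if rho < best_rho:
                best_v, best_rho = v, rho
        chosen.append(best_v)
        remaining.discard(best_v)
        adj[best_v, :] = 0.0
        adj[:, best_v] = 0.0
    return chosen, spectral_radius(adj)

# Star graph on 5 vertices: removing the hub (vertex 0) drops the
# spectral radius from 2 to 0, so greedy picks it first.
star = np.zeros((5, 5))
star[0, 1:] = star[1:, 0] = 1.0
chosen, rho = greedy_vaccinate(star, budget=1)
```

Each round costs one eigenvalue computation per remaining vertex, which matches the polynomial-in-$n$ character of the heuristic discussed above (the paper's algorithm may use a more refined spectral-impact criterion).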
We will clearly emphasize these considerations in our revised manuscript to prevent any confusion regarding computational feasibility, and we hope this addresses the reviewer's valid concerns. ## Stationarity We thank the reviewer for their attention to the stationarity assumption. To clarify, the paper consistently models the SIS process as a discrete-time Markov chain throughout. If the confusion stems from our referencing of works on continuous-time processes, we note that discrete- and continuous-time SIS models exhibit similar qualitative behavior under comparable parameter regimes. Regarding stationarity: we cannot in general assume that any Markov chain has a non-trivial stationary distribution. In fact, in our case, the only true stationary distribution is the absorbing all-zero state. However, it is well-established (e.g., Cator & Van Mieghem, 2013) that in parameter regimes above the epidemic threshold, SIS dynamics will either go extinct quickly, or enter a _meta-stable_ regime—where the distribution over configurations remains approximately constant for a long period prior to extinction. Our Assumption 3.1 formalizes this idea: we assume that samples are drawn during the meta-stable phase, where the distribution is stable enough to allow reliable estimation. We will make this point more explicit in the revised manuscript. To support this further, we conducted experiments (new Figure 3 in file linked below) showing that for the China flu graph of the main paper, the process typically reaches this stable regime within 50–200 steps, depending on infection parameters. As expected, processes below the epidemic threshold (shaded region) do not reach meta-stability and go extinct quickly. Finally, we note that even without assuming meta-stability, our learning algorithm performs well in practice: in all experiments, data collection begins at time step 1 without enforcing stationarity, demonstrating robustness to deviations from this assumption. 
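The meta-stable plateau described above can be observed in a minimal discrete-time SIS simulator. The sketch below is illustrative, not the paper's experimental setup: graph, infection parameters, and initial condition are our own assumptions, chosen to sit above the epidemic threshold.

```python
import numpy as np

def sis_step(adj, state, p_inf, p_rec, rng):
    """One synchronous round of discrete-time SIS: each infected node
    recovers w.p. p_rec; each susceptible node gets infected w.p.
    1 - (1 - p_inf)^(# infected neighbours)."""
    p_catch = 1.0 - (1.0 - p_inf) ** (adj @ state)
    new_inf = (state == 0) & (rng.random(len(state)) < p_catch)
    recover = (state == 1) & (rng.random(len(state)) < p_rec)
    nxt = state.copy()
    nxt[new_inf] = 1
    nxt[recover] = 0
    return nxt

rng = np.random.default_rng(0)
n = 200
adj = (rng.random((n, n)) < 0.05).astype(int)   # Erdos-Renyi, avg degree ~10
adj = np.triu(adj, 1)
adj = adj + adj.T                                # symmetric, no self-loops
state = (rng.random(n) < 0.2).astype(int)        # ~20% initially infected

history = []
for _ in range(400):
    state = sis_step(adj, state, p_inf=0.2, p_rec=0.3, rng=rng)
    history.append(state.mean())
# Above the threshold, the infected fraction settles into a roughly
# constant (meta-stable) plateau instead of dying out.
```

With the infection-to-recovery ratio here well above the threshold, the prevalence trace flattens out after an initial transient, mirroring the behaviour reported for the China flu graph.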
### Thank you again for your careful review and thoughtful comments. We believe we've addressed your main concerns, clarified all technical points, and demonstrated both the soundness and significance of our contributions. We would greatly appreciate it if you could reconsider your evaluation. **Link to new figures: https://drive.proton.me/urls/5N1PC0M6ZW#oCweKFODmBNn** ## References Korhonen, T. and Lokshtanov, D. An Improved Parameterized Algorithm for Treewidth. In *STOC*. 2023. Cator, E. and Van Mieghem, P. Susceptible-infected-susceptible epidemics on the complete graph and the star graph: Exact analysis. *Phys Rev E*. 2013.
Summary: The paper tackles the critical challenge of minimizing disease extinction time in Susceptible-Infected-Susceptible (SIS) models under unknown contact networks. The authors propose a two-stage framework: - Network Inference: A novel inclusion-exclusion learning algorithm with provable sample complexity bounds - Vaccination Optimization: (i) Optimal dynamic programming for bounded-treewidth graphs, (ii) Efficient greedy heuristic for general graphs Experimental validation on real-world influenza outbreak data demonstrates superior performance over baseline methods. Claims And Evidence: • Theoretical Foundations: - Formal proofs for structure learning guarantees (Theorem 3.1) - Spectral radius-extinction time relationship analysis (Theorem 4.1) - Time complexity analysis of vaccination algorithms (Theorem 4.2) • Empirical Validation: - Consistent outperformance on Beijing influenza transmission networks - Robustness tests with varying observation data sizes - Statistical significance analysis through confidence intervals Methods And Evaluation Criteria: • Proposed Methodology: - Structure Learning: Leverages infection state correlations via inclusion-exclusion principle - Vaccination Strategies: (1) Dynamic programming using tree decomposition (bounded treewidth) (2) Greedy vertex removal guided by spectral impact (general graphs) • Evaluation Framework: - Dataset: Augmented OutbreakTrees network with probabilistic edges - Metrics: Infection proportion dynamics, spectral radius reduction - Baselines: Comparative analysis against degree/random vaccination strategies Theoretical Claims: Key theoretical contributions: - Theorem 3.1: Establishes polynomial sample complexity for graph recovery - Theorem 4.1: Proves spectral radius directly impacts epidemic extinction time - Theorem 4.2: Establishes time complexity parameterized by treewidth - Proofs employ established techniques from spectral graph theory and Markov processes, with detailed derivations in supplementary 
materials. Experimental Designs Or Analyses: Experimental Setup: - Simulated SIS dynamics on Beijing influenza contact network (2009) - Parameters calibrated to real-world transmission rates - Simulation of vaccination strategies Key Findings: - Proposed method reduces infection prevalence compared to degree centrality vaccination - Spectral radius reduction with only a small number of vaccinated nodes - Maintains performance stability across observation data sizes Supplementary Material: The supplementary materials provide: - Complete proofs for all theorems - Extended algorithm pseudocode for tree structures - Additional sensitivity analysis of learning parameters - Computational complexity comparisons across graph types Relation To Broader Scientific Literature: This work innovatively bridges two research domains: - Extends Ising model techniques to SIS dynamics - Epidemic Control: Advances spectral radius minimization (SRM) approaches by removing prior structural knowledge requirements - Distinct from existing SRM literature that assumes known networks, this study addresses realistic partial observation scenarios. Essential References Not Discussed: While comprehensive, the paper could engage with Graph neural networks for epidemic modeling (e.g., GraphSAGE) Other Strengths And Weaknesses: - Model Extensions: Incorporate SIR or SEIR dynamics and heterogeneous vaccination effects - Algorithm Enhancement: Explore graph neural networks for large-scale networks - Practical Considerations: Address implementation constraints (e.g., phased vaccination) Other Comments Or Suggestions: NA Questions For Authors: - How does SISLearn perform when infection rates vary over time? Could adaptive updating improve robustness to such changes? - For graphs with treewidth ω = 10 and size n = 1000, does the DP approach remain computationally feasible? What are the practical runtime limits? 
- Given the success of graph neural networks (GNNs) in intervention strategies, why were they not included as baselines? Were there computational or methodological constraints? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their positive assessment and their questions. We answer them below. ### Adaptivity Our learning algorithm, SISLearn, relies on data drawn from the meta-stable distribution of the SIS process, where the probabilities of configurations do not change anymore. However, it can accommodate time-varying infection probabilities, provided the changes are infrequent enough for the system to reach a new meta-stable state between shifts. As shown in Figure 3 of our additional experiments [here](https://drive.proton.me/urls/5N1PC0M6ZW#oCweKFODmBNn) (also see our response to Q6 of Reviewer Gdj4), the meta-stable state is reached in as few as 50 time steps depending on the SIS parameters. Importantly, since SISLearn sequentially learns each node’s neighborhood independently, even if the infection parameters shift mid-process, it suffices to re-estimate the new infection probabilities and allow the process to reach the new meta-stable state. Under these conditions, the theoretical guarantees would hold with minor changes. It should be noted that in practice, SISLearn performs well even without waiting for full stabilization. All experiments (main paper and appendix) use observations starting from the first round, and the algorithm still achieves high learning performance (see experiments in Appendix A.2). Moreover, SISLearn is robust to parameter misspecification. It only requires an estimate of the infection probability, and even if this input is significantly incorrect, the algorithm performs well in practice. In new experiments where the true infection probability was 0.3 but the input was set to 0.7, SISLearn still achieved an F1 score of 0.80 (compared to 0.97 with the correct value), showing graceful degradation under substantial misspecification. These new experiments are shown in Figure 4 [here](https://drive.proton.me/urls/5N1PC0M6ZW#oCweKFODmBNn). 
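As an example of how re-estimating the infection probability from raw observations could look, here is a hedged sketch (not the paper's SISLearn procedure). It assumes the discrete-time SIS dynamics and, purely for illustration, that the adjacency is known: among susceptible nodes with exactly one infected neighbour at time $t$, the fraction infected at $t{+}1$ equals $p_\text{inf}$ in expectation.

```python
import numpy as np

def estimate_p_inf(adj, states):
    """Frequency estimate of the per-contact infection probability from a
    sequence of observed infection-state vectors (consecutive rounds)."""
    trials = successes = 0
    for y_t, y_next in zip(states[:-1], states[1:]):
        # susceptible nodes with exactly one infected neighbour at time t
        mask = (y_t == 0) & ((adj @ y_t) == 1)
        trials += int(mask.sum())
        successes += int(y_next[mask].sum())
    return successes / trials if trials else float("nan")

# Generate synthetic observations on a ring (degree-2) graph with known
# parameters, then recover p_inf from the state sequence alone.
rng = np.random.default_rng(1)
n = 100
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1

true_p_inf, p_rec = 0.5, 0.1
state = (rng.random(n) < 0.5).astype(int)
states = [state]
for _ in range(2000):
    p_catch = 1.0 - (1.0 - true_p_inf) ** (adj @ state)
    stay_inf = (state == 1) & (rng.random(n) >= p_rec)
    get_inf = (state == 0) & (rng.random(n) < p_catch)
    state = (stay_inf | get_inf).astype(int)
    states.append(state)

est = estimate_p_inf(adj, states)   # concentrates near the true value 0.5
```

Re-running the same estimator on a post-shift window of observations is the kind of lightweight re-estimation step described above.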
### Computational feasibility We conducted two additional experiments to explicitly evaluate the computational feasibility of our DP algorithm. First, we measured runtime on graphs with fixed treewidth $\omega = 10$ and increasing number of vertices (up to $n=2000$); results provided in Figure 1 [here](https://drive.proton.me/urls/5N1PC0M6ZW#oCweKFODmBNn). These confirm that DP scales cubically in $n$, consistent with our theoretical analysis (Section D.3.3), and remains practical even for graphs with as many as $2000$ vertices! Second, we analyzed runtime on random Erdős–Rényi graphs with varying $n$ and naturally increasing treewidth; results provided in Figure 2 [here](https://drive.proton.me/urls/5N1PC0M6ZW#oCweKFODmBNn). This experiment highlights that while DP runtime grows exponentially with increasing treewidth, our proposed heuristic (Algorithm 4) remains computationally efficient (scaling cubically as shown in Section 4.3) and achieves vaccination performance close to DP, as demonstrated in Section 5. Thus, in practice, for graphs with known small-to-moderate treewidth ($\omega \lesssim 20$), the DP method is recommended even for large graphs. Otherwise, the greedy heuristic is an effective and scalable alternative, offering near-optimal performance with significantly lower runtime. ### Regarding GNNs While GNNs have shown success in epidemic modeling (Liu, 2024), to the best of our knowledge, no existing work directly employs GNNs for learning or vaccination strategies in SIS models. Developing such a GNN-based baseline would itself constitute a substantial research effort beyond a simple baseline comparison. However, if the reviewer is aware of specific GNN-based approaches applicable to our setting, we would gladly include them in our evaluation. **Link to new figures: https://drive.proton.me/urls/5N1PC0M6ZW#oCweKFODmBNn** ### References Liu Z, Wan G, Prakash BA, Lau MSY, Jin W. A Review of Graph Neural Networks in Epidemic Modeling. 
*Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining.* KDD ’24.
Summary: This paper studies the SIS model on an unknown graph. The process runs in discrete time. Any infected node becomes infected/susceptible in the next round with prescribed probabilities (independently of the rest of the graph). Any susceptible node can become infected from one of its infected neighbors. The goal is to vaccinate K individuals in a way to minimize the expected extinction time of the illness. When a vertex is vaccinated, the probability that it gets infected by a neighbor is decreased by a factor of $\alpha$ for all future rounds. Given a sequence of infection observations in discrete time, the authors propose an algorithm which first learns the graph and then decides which vertices to vaccinate. The graph learning step uses an inclusion-exclusion approach, first learning a superset of each vertex's neighbor set, and then paring it down using conditional independence properties of the infection process. The vaccination step solves a surrogate problem, namely Spectral Radius Minimization. The problem is solved in polynomial time for graphs with bounded treewidth. Experiments on a real dataset are included, comparing the proposed approach to several baselines. Claims And Evidence: Theorem 3.1 claims that under certain assumptions on the dataset (namely, the number of samples satisfying certain conditions, as well as a stationarity assumption), the inclusion-exclusion approach identifies the correct graph. Algorithm 2 provides a DP algorithm which runs in polynomial time (as long as the graph has bounded treewidth) and returns a set of K vertices whose deletion yields a graph with minimal spectral radius. The authors argue that solving this auxiliary problem is a good substitute for minimizing the expected extinction time. Methods And Evaluation Criteria: The datasets and benchmarks are reasonable. Additional experiments are found in the supplementary material. I found it hard to interpret the number of rounds in real-world terms. 
What would 2000 rounds correspond to in real time? If that is e.g. 100 years, then I don't find the stationarity assumption justified, since reducing the number of rounds significantly degrades the performance of the proposed algorithm (Figure 2). Also, since the first step in the algorithm is estimating the graph, there should be an empirical verification of graph recovery. Theoretical Claims: Lemma D.1 is false. Consider the case where 0 < p <= 1/2. Suppose that X_1 = 0 with probability 1/2 and X_1 = 2p with probability 1/2. Suppose all the X_i's are equal. Then $p = \frac{1}{n} \mathbb{E}[S_n]$. Set $\epsilon = p/2$ and $\delta < 1/2$. Then $\mathbb{P}\left(\left| \frac{1}{n} S_n - p \right| \geq \epsilon + \frac{1}{n} \right) = \frac{1}{2}$ for $n$ sufficiently large, contradicting the claim. Theorem 1.8 of Pelekis and Ramon (2017; arXiv version) is false with the same counterexample. The journal version appears to be missing the corresponding result. (Update below in light of rebuttal; score increased accordingly) Experimental Designs Or Analyses: (See above concerns about the number of timesteps and the missing verification of graph estimation) Supplementary Material: I identified the main helper lemma for the inclusion-exclusion result (Lemma D.1). Relation To Broader Scientific Literature: Epidemic modeling is a popular topic in network science. I am not aware of other results that require learning the network, so this is an interesting direction. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is well-written. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their careful reading and attention to detail. The reviewer is absolutely right—Lemma D.1, as originally stated, was incorrect and developed too hastily. We have corrected the result below by applying Theorem 1.2 from Kontorovich & Ramanan (2008). We also address the reviewer's other concerns below. ### Lemma D.1 (Replacement) Given observations $\\{Y^{(t)}\\}_{t \in T_D}$ from the SIS process where $T_D$ are not necessarily consecutive time indices and $Y^{(t)} \in\\{0,1\\}^V$, under Assumption 3.1, for any subset $U \subseteq V$, any state $y_U \in \\{0,1\\}^{|U|}$, and any positive $\varepsilon$ and $\delta$, we have the following deviation bound on the unbiased estimator $$ \mathbb{P}\left(\left| \mathbb{P}(Y_U = y_U) - \hat{\mathbb{P}}(Y_U = y_U) \right| \geq \varepsilon \right) \leq\delta, $$ whenever $|T_D| \geq \frac{2\log(2 / \delta)}{\varepsilon^2 (1-\theta)^2}$, where $0 < \theta < 1$ is a constant depending on the graph structure and model parameters. Here, $\mathbb{P}(Y_U = y_U)$ is the marginal probability of the vertices in $U$ being described by state vector $y_U$ and $\hat{\mathbb{P}}(Y_U = y_U) = \varphi_U$ is the empirical estimate over $T = |T_D|$ samples. Proof: Let $\mathcal{S}$ be the state space of our Markov chain, i.e., $\mathcal{S} = \\{0,1\\}^V$, $T = |T_D|$ the number of observations, and $\\mathcal{S}^T$ be the product state space of $T$ observations. By definition, the process $\\{Y^{(t)}\\}\_{t \in \mathbb{N}} $ has the Markov property in time, hence so does a sub-sequence $\\{Y^{(t)}\\}\_{t \in T_D}$. We define $\varphi_U:\mathcal{S}^T \rightarrow \mathbb{R}$ as: $$ \varphi_U(Y^1, \dots, Y^T) = \frac{1}{T} \sum_{t=1}^{T} \mathbb{1} \\{Y_U^{(t)} = y_U\\}, $$ where $ \mathbb{1} \\{\cdot\\}$ is the indicator function. Notice that this function is $1/T$-Lipschitz with respect to the Hamming metric $d(X,Y) = \sum_{t=1}^T \mathbb{1} \\{X^{(t)} \neq Y^{(t)}\\}$ on $\mathcal{S}^T$. 
We apply Thm. 1.2 by Kontorovich & Ramanan (2008) for Markov chains, stating that for a $c$-Lipschitz function $\varphi$ on $\mathcal{S}^n$, $$ \mathbb{P}\\{|\varphi-\mathbb{E} \varphi| \geq t\\} \leq 2 \exp \left(-\frac{t^2}{2 n c^2 M_n^2}\right). $$ In our case, the sequence length is $T$ (replacing $n$ in the theorem), $c=1/T$, $M_n$ becomes $M_T$, and $t$ becomes $\varepsilon$. The bound then becomes $$ \mathbb{P}\\{|\varphi_U-\mathbb{E} \varphi_U| \geq \varepsilon\\} \leq 2 \exp \left(-\frac{\varepsilon^2}{2 T (1/T)^2 M_T^2}\right) = 2 \exp \left(-\frac{T \varepsilon^2}{2 M_T^2}\right), $$ where $M_T = (1-\theta^T)/(1- \theta)$ and $\theta$ is the Markov contraction coefficient given by $$ \theta = \sup\_{\\{Y^{\prime}, Y^{\prime \prime} \in \mathcal{S} \\}} \left\\|p\left(\cdot \mid Y^{\prime}\right)-p\left(\cdot \mid Y^{\prime \prime}\right)\right\\|_{\mathrm{TV}}. $$ We have $\theta <1$ since every configuration $Y$ can transfer into the all-zero configuration $\underline{0}$. Hence, for any two states $Y^{\prime}, Y^{\prime \prime} \in \mathcal{S}$ both $p(\underline{0}|Y^{\prime})>0$ and $p(\underline{0}|Y^{\prime \prime})>0$ and the support of $p(\cdot|Y^{\prime})$ and $p(\cdot|Y^{\prime \prime})$ is not disjoint which implies $\theta <1$. Therefore, $M_T <1/(1-\theta)$ and the probability bound becomes: $$ \mathbb{P}\\{|\varphi_U-\mathbb{E} \varphi_U| \geq \varepsilon\\} < 2 \exp\left(-\frac{T \varepsilon^2 (1-\theta)^2}{2}\right). $$ Finally, setting the RHS to be $\leq \delta$ and solving for $T$ we get $$ T \geq \frac{2\log(2 / \delta)}{\varepsilon^2 (1-\theta)^2}. \quad \square $$ We emphasize that Lemma D.1 is not a central result of our paper. It is used solely to derive concentration bounds for estimating the direct and conditional influence quantities, thereby enabling our sample complexity analysis. Crucially, the correctness of the SISLearn algorithm itself is unaffected by this lemma. 
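Numerically, the corrected sample-size bound behaves as follows; the $\varepsilon$, $\delta$, $\theta$ values below are illustrative and not taken from the paper.

```python
import math

def required_samples(eps, delta, theta):
    """Sufficient number of observations T from the corrected bound:
    T >= 2 * log(2/delta) / (eps^2 * (1 - theta)^2)."""
    return math.ceil(2 * math.log(2 / delta) / (eps ** 2 * (1 - theta) ** 2))

# The dependence on the Markov contraction coefficient theta is quadratic
# in 1/(1 - theta): a slowly mixing chain (theta near 1) needs far more data.
t_fast = required_samples(eps=0.1, delta=0.05, theta=0.5)
t_slow = required_samples(eps=0.1, delta=0.05, theta=0.9)
```

Going from $\theta = 0.5$ to $\theta = 0.9$ inflates the requirement by a factor of $25$, which is exactly the $(1-\theta)^{-2}$ scaling in the bound.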
With the corrected bound, all downstream results remain valid (modulo minor constant adjustments), and in fact, Lemmas D.2–D.4 become cleaner, as we can eliminate the $1/m$ term in favor of a dependence on the (constant) Markov contraction coefficient $\theta$, which depends on the graph structure and the infection parameters. ### Regarding rounds The real-world interpretation of a "round" depends on the application. It may correspond to seconds in financial or communication networks, or days in epidemiological settings. The time to reach stationarity similarly depends on the timescale of the underlying process. ### Regarding graph recovery We provide experimental results on the learning performance of SISLearn in App. A.2. We also performed new experiments on the robustness of SISLearn, given in new Fig. 4 [here](https://drive.proton.me/urls/5N1PC0M6ZW#oCweKFODmBNn) (see Q4 of Reviewer Gdj4). ### References Kontorovich L., Ramanan K. Concentration inequalities for dependent random variables via the martingale method. *The Annals of Probability*. 2008. --- Rebuttal Comment 1.1: Comment: (copying as it was originally posted as an "official comment") Thank you for addressing my main concern regarding Lemma D.1. An observation: your upper bound on \theta is implicitly 1 minus the probability of reaching the all zero state, from a worst-case initial state. Then 1-\theta is upper-bounded by the worst-case probability of reaching the all-zero state =: q. The extinction time is dominated by Geom(q), which means that the theoretical guarantee for T is larger than the square of the expected extinction time. This means that the strategy of learning the graph first does not come with a meaningful theoretical guarantee (as extinction would likely happen before T steps), though it might work well in practice. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their follow-up and observation connecting our sample complexity bound to extinction time. 
As the reviewer points out, in the worst case, $\theta = 1 - q$. However, this worst-case $\theta$ is governed by transitions from extinction (i.e., the all-zero state $\underline{0}$) and does **not** reflect the behavior of the chain in the meta-stable regime. Since the problem becomes trivial upon extinction, we are implicitly conditioning on survival. If one restricts the supremum in the definition of $\theta$ to configurations in $\\{0,1\\}^V \setminus \\{\underline{0}\\}$, the resulting contraction coefficient becomes strictly smaller, depending instead on the graph structure and infection parameters. Turning to our bound on $T$, we wish to clarify that the direction of the inequality cited by the reviewer is reversed. Our guarantee requires $T \geq \frac{C}{(1 - \theta)^2}$, where $C = \frac{2 \log(2 / \delta)}{\varepsilon^2}$. Since $\theta <1 - q$, it follows that $(1 - \theta)^2 > q^2$, and thus $\frac{1}{(1 - \theta)^2} < \frac{1}{q^2}$. Therefore, our required number of samples $T$ is **upper bounded** by $\mathcal{O}(1/q^2)$—not larger than it. In other words, contrary to the reviewer’s conclusion, our theoretical sample complexity is at most on the order of the square of the expected extinction time, not worse. These observations clarify that our theoretical bound is meaningful within the intended regime—i.e., while the process has not gone extinct and remains in its meta-stable phase. In this regime, the relevant contraction coefficient is smaller than the worst-case bound. Indeed, our experiments confirm that SISLearn performs well even with short observation windows (e.g., 400 rounds starting from $t = 1$, without waiting for meta-stability), supporting the practical relevance of the bound. We hope this clarification resolves the concern, and we sincerely thank the reviewer again for their thoughtful engagement with the technical details.
Summary: This paper is broadly about vaccinating the nodes of a network over time (subject to a total budget on the number of vaccinations) in order to minimize the expected extinction time of the epidemic. The SIS model is assumed, and interestingly, the network is not assumed to be known, but has to be learned. (SIS is a reasonable model where individuals can get infected repeatedly, such as with cholera.) The paper thus splits the task into two parts: (a) learning the network, and (b) vaccination. Theoretical results and experimental evidence are given. ## update after rebuttal: I have increased my score to "Weak Accept" given the rebuttals. Claims And Evidence: I find the following claims problematic/unclear: I ask the authors to clearly explain these. 1. In what sense is the algorithm "optimal" as claimed in the abstract? If it is in the sense of "if the underlying graph is a tree" as in Appendix B, this would be a very weak claim as trees are not at all natural models for disease/communication spread. 2. I assume the algorithm is not "online" in the sense that for each t, given the infection states up to time t, we have to immediately decide the set R_t of nodes to vaccinate at time t? That is, we assume the "offline" case where all the Y^t (for 1 <= t <= T) are given upfront? If so, why do the vaccination in multiple stages at all? 3. It does not seem a reasonable assumption to make that all nodes independently get infected with the same probability at time t = 0: how flexible can this be made? 4. I assume the disease parameters are known to the algorithm? 5. Give references in the literature to where social networks seem to have small treewidth. 6. Please justify the stationarity assumption: it appears strong to me. Methods And Evaluation Criteria: The methods appear reasonable. Theoretical Claims: I have asked the authors to justify the stationarity assumption. Experimental Designs Or Analyses: The experimental analysis appears adequate. 
Supplementary Material: I went over most of the supplementary material. Relation To Broader Scientific Literature: The connections to the existing literature look adequate to me. Essential References Not Discussed: N/A to my knowledge. Other Strengths And Weaknesses: Learning the network appears like a good problem to me. The vaccination model parametrized by alpha also looks good and reasonable. Other Comments Or Suggestions: Say *expected* in the initial discussion on extinction time as well, as you have done in defining the VUG problem. The paper is well-written in general. One typo: "combines network learning with strategic vaccination strategy" --> "combines network learning with a strategic vaccination strategy" Questions For Authors: Please respond in detail to my modeling/other questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review and their insightful questions. We respond to each of them in detail below. ### Q1 We propose three vaccination strategies, all of which solve the Spectral Radius Minimization (SRM) problem: (1) An optimal polynomial-time algorithm for trees (Appendix B, Algorithm 5). (2) An optimal (DP) algorithm that solves SRM on arbitrary graphs (Section 4.2, Algorithm 2), with polynomial runtime on graphs of bounded treewidth. (3) A greedy polynomial-time heuristic (Section 4.3, Algorithm 4). The abstract refers to the second method. By “optimal,” we mean that it exactly solves the SRM problem: given a budget $K$, it finds a subset of vertices of size $\leq K$ whose removal minimizes the spectral radius. This is stated and proven in Theorem D.1. ### Q2 The VUG problem is formulated as an online problem: the agent observes the infection states over time and must decide when and whom to vaccinate, subject to a global budget $K$ (Section 2). In particular, the agent is not required to vaccinate all $K$ nodes at once. That said, our proposed method instantiates this framework in an offline fashion: we collect $T_D$ rounds of observations, learn the graph using SISLearn, and then vaccinate all $K$ nodes at time $t = T_D + 1$. This choice reflects that, assuming the graph is learned perfectly, vaccinating all at once is optimal for minimizing extinction time. Exploring *adaptive* or *staggered* vaccination—where the agent begins intervening while still learning—is a compelling direction for future work. ### Q3 This assumption is not essential for our approach or theoretical results. All our lemmas, theorems, and algorithmic results hold under different initial infection states, whether they involve a single infected vertex, an arbitrary deterministic or probabilistic subset, or any other initial configuration. We adopted the standard uniform-seed initialization solely because it is common in the SIS literature. 
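To make the SRM objective from Q1 concrete, here is a brute-force greedy sketch in NumPy (a hypothetical illustration, not the paper's Algorithm 2 or Algorithm 4) that repeatedly removes the vertex whose deletion most reduces the adjacency spectral radius:

```python
import numpy as np

def greedy_vaccinate(A, K):
    """Greedy sketch for Spectral Radius Minimization (SRM):
    at each step, remove the vertex whose deletion most reduces the
    largest eigenvalue modulus of the adjacency matrix."""
    n = A.shape[0]
    alive = list(range(n))
    removed = []
    for _ in range(K):
        best, best_rho = None, np.inf
        for v in alive:
            keep = [u for u in alive if u != v]
            sub = A[np.ix_(keep, keep)]
            rho = np.abs(np.linalg.eigvals(sub)).max()
            if rho < best_rho:
                best, best_rho = v, rho
        removed.append(best)
        alive.remove(best)
    return removed

# Star graph K_{1,4}: node 0 is the hub
A = np.zeros((5, 5))
A[0, 1:] = A[1:, 0] = 1
print(greedy_vaccinate(A, 1))  # hub (node 0) is chosen first
```

On the star graph above, the hub is removed first: deleting it drops the spectral radius from 2 to 0, while deleting a leaf only reduces it to √3. Each step costs a full eigendecomposition per candidate vertex, so this sketch is for illustration only.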
### Q4 The only disease parameter SISLearn requires is the infection probability, $p_\text{inf}$, which can be estimated from observational data, as done in Kirkeby et al. (2017) via a mean-field approximation. Importantly, SISLearn is robust to parameter misspecification, as evidenced by new experiments where the true infection probability was 0.3 but the input was set to 0.7, and SISLearn still achieved an F1 score of 0.80 (compared to 0.97 with the correct value). These new experiments are shown in new Figure 4 (see linked file below). ### Q5 While real-world social networks may not exhibit small treewidth, our work addresses this directly: our DP algorithm is ideal for graphs with small to moderate treewidth (e.g., up to ~10-20; see new Figure 1, and our response to Reviewer WHtL), while our fast and scalable greedy heuristic (Algorithm 4) performs well on general graphs—including those with high density (see new Figure 2). ### Q6 Our assumption of stationarity arises naturally from well-established theoretical results for SIS-type processes. It has been shown that if the infection parameters are above the epidemic threshold $\rho(\mathcal{G}) \geq p_{\text{rec}}/p_{\text{inf}}$, the process may reach a meta-stable distribution, resembling a stationary distribution (see, among others, Schonmann 1985, Liggett 1999, and Mountford et al. 2013). Conversely, if the threshold condition is not satisfied, the infection dies out rapidly, trivially resolving the vaccination problem. More formally, following the coupling argument of Cator and Van Mieghem (2013), the SIS process can be related to a modified Markov chain that excludes the absorbing all-zero state. This modified chain is ergodic and therefore has a proper stationary distribution. Thus, until extinction occurs, the original SIS process behaves like an ergodic Markov chain with a stationary distribution.
In other words, either the SIS process (quickly) reaches a meta-stable state where our assumption holds, or it dies out. The former is backed up by new experiments, given in new Figure 3, where the SIS process reached the meta-stable state in an average of 100 rounds. Thus, the stationarity assumption, while seemingly strong, is both theoretically well-founded and empirically justified. ### New figures: https://drive.proton.me/urls/5N1PC0M6ZW#oCweKFODmBNn ### References Kirkeby C, et al. Methods for estimating disease transmission rates: Evaluating the precision of Poisson regression and two novel methods. *Sci Rep*. 2017. Cator E, et al. Susceptible-infected-susceptible epidemics on the complete graph and the star graph: Exact analysis. *Phys Rev E*. 2013. Liggett TM. Stochastic Interacting Systems: Contact, Voter and Exclusion Processes. *Springer*. 1999. Mountford T, et al. Metastable densities for the contact process on power law random graphs. *Electronic Journal of Probability*. 2013. Schonmann RH. Metastability for the contact process. *J Stat Phys*. 1985.
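The meta-stability described above can be reproduced with a minimal discrete-time SIS simulation (all parameters below are hypothetical and unrelated to the paper's experiments):

```python
import numpy as np

def simulate_sis(A, p_inf, p_rec, T, rng):
    """Discrete-time SIS on adjacency matrix A: each susceptible node is
    infected independently by each infected neighbor w.p. p_inf; each
    infected node recovers w.p. p_rec. Returns infected fraction per round."""
    n = A.shape[0]
    state = np.ones(n, dtype=bool)  # start with everyone infected
    prevalence = []
    for _ in range(T):
        k = A @ state                       # infected neighbors per node
        p_get = 1 - (1 - p_inf) ** k        # prob. of at least one transmission
        new_inf = (~state) & (rng.random(n) < p_get)
        recover = state & (rng.random(n) < p_rec)
        state = (state | new_inf) & ~recover
        prevalence.append(state.mean())
    return np.array(prevalence)

rng = np.random.default_rng(0)
A = np.ones((50, 50)) - np.eye(50)  # complete graph: well above threshold
prev = simulate_sis(A, p_inf=0.5, p_rec=0.1, T=200, rng=rng)
print(prev[-20:].mean())
```

Well above the epidemic threshold, the infected fraction settles quickly into a high plateau rather than dying out, which is the quasi-stationary regime the stationarity assumption relies on.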
SymMaP: Improving Computational Efficiency in Linear Solvers through Symbolic Preconditioning
Reject
Summary: This paper introduces a novel method that employs a Recurrent Neural Network (RNN) to learn a sequence of operands for determining preconditioning parameters. The network is trained through a supervised learning approach, with the optimal parameters selected via grid search. Experimental results demonstrate that the proposed method effectively predicts three distinct preconditioning parameters across various datasets. Furthermore, it achieves significantly faster computation times compared to both default parameter settings and different fixed constants. ## update after rebuttal I have increased my score from 1 to 2. Claims And Evidence: Yes. Methods And Evaluation Criteria: The evaluation criteria include computation time and condition numbers, which are pertinent to the design of the preconditioner. Theoretical Claims: The work does not present any new theoretical claims. Experimental Designs Or Analyses: The experimental design is fundamentally sound; however, the study would benefit from additional experiments to further validate the proposed method. Specifically, it is crucial to evaluate the performance across datasets of varying sizes to assess the scalability and robustness of the approach. Moreover, while SymMap is specifically tailored for predicting preconditioning parameters, its comparative advantages remain unclear without **benchmarking against state-of-the-art learning-based preconditioners**. For instance, comprehensive comparisons with methods such as those proposed in [1-5] would provide valuable insights into the relative strengths and limitations of SymMap. These comparisons could include metrics such as convergence rates, computational efficiency, and generalization capabilities across different problem domains. [1] Li, Yichen, Peter Yichen Chen, Tao Du, and Wojciech Matusik. "Learning preconditioners for conjugate gradient PDE solvers." In International Conference on Machine Learning, pp. 19425-19439. PMLR, 2023. 
[2] Häusner, Paul, Ozan Öktem, and Jens Sjölund. "Neural incomplete factorization: learning preconditioners for the conjugate gradient method." Transactions on Machine Learning Research (2024). [3] Chen, J., 2024. "Graph neural preconditioners for iterative solutions of sparse linear systems." arXiv preprint arXiv:2406.00809. [4] Luo J, Wang J, Wang H, et al. Neural Krylov iteration for accelerating linear system solving[J]. Advances in Neural Information Processing Systems, 2024, 37: 128636-128667. [5] Grementieri L, Galeone P. Towards neural sparse linear solvers[J]. arXiv preprint arXiv:2203.06944, 2022. Supplementary Material: Not really, since I am not convinced by the experiment design. Relation To Broader Scientific Literature: This paper concentrates on the development of a learning-based preconditioner for solving partial differential equations (PDEs), which is closely linked to prior research [6, 7]. [6] Greenfeld D, Galun M, Basri R, et al. Learning to optimize multigrid PDE solvers[C]//International Conference on Machine Learning. PMLR, 2019: 2415-2423. [7] Hsieh J T, Zhao S, Eismann S, et al. Learning neural PDE solvers with convergence guarantees[J]. arXiv preprint arXiv:1906.01200, 2019. Essential References Not Discussed: The paper lacks an analysis of works related to learning-based methods for solving linear systems, such as [1-5]. Other Strengths And Weaknesses: ### Strengths: - This paper presents a novel and intriguing approach by offering a symbolic framework for predicting preconditioning parameters. - The experimental results demonstrate that this method achieves reduced computational time. ### Weaknesses: - The discussion of related work is somewhat overly succinct and would benefit from a more comprehensive analysis of related learning-based PDE solvers. - The **computational experiments are far from sufficient**.
Although SymMaP is specifically designed for generating preconditioning parameters, it is essential to include additional **learning-based preconditioner** methods for comparison on the same datasets, such as [5], rather than limiting the analysis to merely multilayer perceptrons (MLPs). - This method is a supervised approach. The process of obtaining the optimal preconditioning method through grid search is prohibitively expensive, limiting its applicability to large-scale problems. - The representation of feature parameters is somewhat simple. Did the authors consider **permutation invariance** in this representation method? Other Comments Or Suggestions: No. Questions For Authors: See the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful and valuable comments. We respond to each comment and sincerely hope that our rebuttal could properly address your concerns. If so, we would deeply appreciate it if you could raise your score. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our submission. ## **Weaknesses 1 & 2** **SymMaP enhances traditional preconditioning algorithms by discovering symbolic expressions for optimal parameters.** In our experimental design, we intentionally avoid comparisons with other learning-based approaches for the following reasons: 1. **Focus on Preconditioning Enhancement**: SymMaP is designed to improve traditional preconditioners without altering their core algorithms. A fair comparison would require other learning-based methods that also preserve the original preconditioning process—yet no such methods currently exist. 2. **CPU Compatibility**: As noted in the Introduction (line 26, right column), linear solver environments are predominantly CPU-based. SymMaP’s symbolic expressions integrate seamlessly into CPU workflows, whereas neural network-based approaches lack efficient CPU deployment. 3. **Generality**: Section 3.2 (line 152, left column) highlights SymMaP’s adaptability to diverse preconditioners and linear solvers. In contrast, existing learning-based methods either combine with specific preconditioners (e.g., [4], [6]) or target narrow use cases (e.g., [8] and [2] for ICC-preconditioned CG; [1] exclusively for CG). 4. **Interpretability & Safety**: Numerical algorithms demand rigorous analysis (Section 3.2, line 129, right column). Opaque predictors (e.g., neural networks) risk violating theoretical constraints (e.g., avoiding ω ≈ 0 or 2 in SOR). SymMaP’s symbolic expressions enable proactive avoidance of such issues through analytical guarantees.
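The SOR safety point in item 4 above can be checked numerically: the spectral radius of the standard SOR iteration matrix $M_\omega = (D+\omega L)^{-1}((1-\omega)D - \omega U)$ approaches 1 as ω nears 0 or 2, so a predictor returning such values would stall convergence. A generic NumPy sketch (textbook SOR, not SymMaP code):

```python
import numpy as np

def sor_spectral_radius(A, omega):
    """Spectral radius of the SOR iteration matrix
    M = (D + omega*L)^{-1} ((1 - omega)*D - omega*U).
    The iteration converges iff this radius is below 1."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    M = np.linalg.solve(D + omega * L, (1 - omega) * D - omega * U)
    return np.abs(np.linalg.eigvals(M)).max()

# 1D Poisson matrix (tridiagonal stencil [-1, 2, -1])
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
for omega in (0.1, 1.0, 1.88, 1.99):
    print(omega, round(sor_spectral_radius(A, omega), 3))
```

For this matrix the printed radius stays close to 1 at ω = 0.1 and ω = 1.99 and is smallest near the optimal relaxation factor (about 1.88), which is exactly the kind of regime an analytical check on a symbolic expression can enforce.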
To further address your concerns, we can compare SymMaP with other learning-based preconditioning methods (**disregarding the aforementioned factors**). Let us first categorize existing works:
1. Category 1: Methods that enhance Krylov solvers by adding optimization modules or modifying their iteration process (e.g., [4], [6]). These works target different components than SymMaP and could potentially be combined with our approach for further improvements.
2. Category 2: Methods that use neural networks to predict a preconditioning matrix (e.g., [8], [1], [7]).

- **Experiment**: SSOR preconditioning for elliptic PDEs (same settings as the paper):

| method | none | PETSc default SSOR | [4] | [4]+SSOR | [4]+SymMaP+SSOR | [1] | [8] | SymMaP+SSOR |
| -------- | ---- | ------------------ | ---- | -------- | --------------- | ---- | ---- | ----------- |
| time (s) | 23.9 | 10.5 | 11.2 | 7.5 | 5.45 | 8.6 | 8.4 | 7.7 |

- Training times: [1] (0.5h), [8] (0.5h), [4] (2h), SymMaP (0.25h).
- Key takeaway: **SymMaP outperforms standalone learning-based methods and can combine with them for further gains**.
- **We reiterate that direct comparisons are inherently unfair**—akin to contrasting symbolic (Mathematica) and numerical (MATLAB) PDE solvers. If reviewers identify similar works to ours, we welcome suggestions and will conduct comparative experiments accordingly.

[8] Learning from Linear Algebra: A Graph Neural Network Approach to Preconditioner Design for Conjugate Gradient Solvers

## **Weaknesses 3**
- Dataset size impact: SymMaP is fundamentally distinct from neural network training, as it seeks a symbolic expression to approximate the mapping relationship in the data rather than optimizing parameters. Our preliminary tests show that **even small datasets (~100 samples) suffice to learn high-performance symbolic expressions**. For consistency, all experiments in the paper use a dataset size of 1,000.
- Additional experiments: To further address this concern, we conducted supplementary tests on SOR preconditioning for elliptic PDEs (settings identical to the main experiments), varying the training set size (100, 300, 500, 1,000). **Results show that regardless of the dataset size, SymMaP can find the best symbolic expression reported in our paper within 1000 seconds.** ## **Weaknesses 4** - Current design: Experiments intentionally avoid presupposing symmetry among input parameters, reflecting real-world scenarios where such relationships are unknown. - Future direction: We agree this is valuable and plan to develop a module that analyzes mathematical properties (e.g., symmetries) to constrain symbolic search spaces, improving efficiency and performance. --- Rebuttal Comment 1.1: Comment: Thank you for your efforts in addressing the reviewers' comments. However, I do not find arguments 1–4 to be fully convincing. In my view, as long as the proposed method improves the performance of linear solvers, the following points should not be major concerns: 1. Whether the core algorithm is modified or not. 2. Whether GPU/CPU is used or not. Additionally, generality should not be the primary focus here. Instead, the authors could consider a given preconditioner and then determine the best learning-based method as the baseline for comparison. Regarding the experiments in the rebuttal, I find them unclear. Could the authors provide further clarification? Specifically, I would like to see a direct comparison between the proposed algorithm and state-of-the-art learning-based methods. The discussion on "enhancement" seems tangential—the core contribution should be the solver's performance, not auxiliary improvements. As other reviewers have noted (and as implied by my previous comments), the paper’s title emphasizes **linear solvers**, yet the experiments focus heavily on PDE-related instances.
Given that linear solvers have broad applications (e.g., in optimization), could the authors include benchmarks from other domains to better demonstrate the method's versatility? --- Reply to Comment 1.1.1: Comment: Thank you for your follow-up. Allow me to elaborate further. Detailed experimental data and specific settings are available at https://anonymous.4open.science/r/rebuttal3-90EF/Reviewer%20Wyqu%20.png. - **SymMaP Applicability** - SymMaP is suitable for: 1. Linear systems that can be parameterized (with a limited number of parameters, e.g., <1000). 2. Scenarios where preprocessing parameters require optimization. - Key advantages of SymMaP over other learning-based methods: 1. Superior performance in CPU-only environments. 2. High interpretability. 3. Flexibility in algorithm selection. - **Additional comparative experiments** - To address your concerns, we have expanded our comparative experiments with learning-based algorithms. - **Additional Datasets** - To demonstrate broader applicability, we tested: 1. Markov Chain (Optimization Problem): Boltzmann-distributed Brownian motion (nonsymmetric, parameterized by Chebyshev coefficients of potential energy). 2. Numerical Integration: Lippmann-Schwinger equation (symmetric, parameterized by the potential’s Chebyshev coefficients). - **Summary** - Across all experiments, the "[4]+SymMaP" combination consistently achieved the shortest computation times. - When evaluating standalone algorithms (without combinations), SymMaP alone demonstrated the fastest performance in all test cases. - The performance advantage of SymMaP was particularly pronounced in CPU-only environments. - **These results conclusively demonstrate the superior performance characteristics of the SymMaP approach**. We sincerely appreciate your thoughtful questions.
Due to space constraints in this response, we will include (1) complete experimental details, (2) more comprehensive analysis, and (3) proper citations for all referenced papers in our final manuscript version. Should you have any further questions or require additional discussion, please don't hesitate to reach out. If we have adequately addressed your concerns, we would be grateful for your consideration in adjusting your evaluation score accordingly. Thank you for your time and valuable feedback.
Summary: This paper uses neural networks to introduce a matrix preconditioning framework via symbolic discovery, where preconditioning is important in linear system solving. This new framework can flexibly predict preconditioning parameters for different scenarios, which surpasses traditional methods focusing on individual scenarios. Additionally, this framework also enjoys efficiency and interpretability. Its performance is also shown by numerical experiments. Claims And Evidence: The authors tried to show the superior performance of their new method by numerical experiments, while I believe more validation should be included. Please see Bullet 1 in the "Methods And Evaluation Criteria" section for more information. Additionally, when talking about generalization and interpretability, it would be better to also have some theoretical results. Methods And Evaluation Criteria: 1. It would be better to illustrate the performance of this framework via (1) more applications in addition to PDEs with similar matrix sizes and (2) testing it with preconditioning methods with more comprehensive preconditioners, such as AMG with multiple parameters. Theoretical Claims: No theory. Experimental Designs Or Analyses: Yes. My main concern is about the general performance of this new method. Please see Bullet 1 in "Methods And Evaluation Criteria" for more information. Additionally, Supplementary Material: I went through the literature review and detailed parameter and experiment setup in the appendix. Relation To Broader Scientific Literature: I am not familiar with symbolic regression and just know some preconditioning techniques in optimization. Based on my understanding, this paper also explores using NN for matrix pre-conditioning, but it offers a new symbolic approach enjoying better efficiency in a pure CPU environment and interpretability. 
Essential References Not Discussed: I am not familiar with symbolic regression and just know some preconditioning techniques in optimization, so I read the review of this paper's previous submission to ICLR 2025 (https://openreview.net/forum?id=4WvCoXU2dF). It seems that the authors did not include some related literature about "alternative approaches to constructing optimal preconditioner parameters", recommended by the previous reviewers, in the new version. Other Strengths And Weaknesses: Although the idea is novel, the p Other Comments Or Suggestions: Please see all other sections. Questions For Authors: What are the differences between SymMaP 1 and 2? In Table 3, the gap between the condition numbers corresponding to them is large. I wonder how to interpret and improve this type of instability. Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful and valuable comments. We respond to each comment and sincerely hope that our rebuttal could properly address your concerns. If so, we would deeply appreciate it if you could raise your score. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our submission. ## **Methods And Evaluation Criteria 1** - In preliminary tests, we observed that for certain preconditioners (e.g., SOR), the optimal preconditioning parameters are largely independent of the matrix size (resolution) for a given problem. Thus, we fixed the matrix size in our experiments for consistency. To address your concern, we conducted additional experiments: - We evaluated SOR preconditioning for a second-order elliptic PDE, testing matrix sizes of $1 \times 10^4, 2 \times 10^4, \dots, 6 \times 10^4$ while keeping other settings identical to the main experiments. - The results show that the optimal parameters for the same problem but different sizes vary negligibly (difference < 0.01). - This confirms that matrix size has minimal impact on SOR’s optimal parameters. **For preconditioners where size does affect parameters, SymMaP can also easily incorporate the size as an additional input.** ## **Methods And Evaluation Criteria 2** - To further validate SymMaP’s versatility, we conducted experiments on AMG preconditioning with an SOR smoother, optimizing two parameters simultaneously: AMG’s threshold parameter θ (default in PETSc: 0) and the SOR relaxation factor ω. - All other settings match the AMG preconditioning and second-order elliptic problem in the paper. We jointly optimize both parameters, with the condition number as the evaluation metric.
| method | None | θ=0, ω=1 | θ=0, ω=0.1 | θ=0, ω=1.9 | Optimal constant | SymMaP |
| ---------------- | ---- | --------- | ----------- | ----------- | ---------------- | ------ |
| Condition number | 6792 | 163 | 168 | 163 | 159 | 156 |

- **This demonstrates SymMaP’s ability to handle multiple preconditioner parameters.** We will include detailed results and analysis in the final version.

## **Essential References Not Discussed**
Please see the response to reviewer Wyqu's **Weaknesses 1 & 2**.

## **Q1**
- As noted in Table 1’s caption (line 275), SymMaP 1 and 2 are the two highest-scoring expressions identified during symbolic learning.
- **The disparity arises from the stochastic nature of the learning process**: Symbolic learning iteratively refines expressions by introducing new operators, which may non-monotonically improve performance (some additions help, others degrade it). Thus, the two expressions represent local optima with similar reward scores but differences in structure and performance due to the randomness of operator selection.

## **Q2**
To explain the significant variation in condition numbers in Table 3 (row 343), we analyze from the following perspectives:
1. **AMG’s Impact on Condition Number**: AMG reduces matrix condition numbers through a multigrid strategy, leveraging coarse-grid correction and high-frequency error smoothing. On coarse grids, restriction and interpolation operators transform low-frequency errors (small eigenvalues) into high-frequency ones, which are then rapidly damped by fine-grid smoothing (e.g., Gauss-Seidel). This multiscale decomposition concentrates the eigenvalue spectrum, significantly lowering the condition number (ratio of largest to smallest eigenvalues).
2. **Role of Threshold Parameter θ**: θ controls strong connectivity during coarse-grid generation. If the coupling strength (off-diagonal entries) exceeds θ, the connection is retained; otherwise, it is discarded, affecting grid sparsity and approximation accuracy.
   1. Too small θ: Retains excessive weak connections, leading to dense coarse grids that inadequately address low-frequency errors (small eigenvalues).
   2. Too large θ: Over-sparsifies the grid, losing critical connections and increasing approximation error, hindering high-frequency error (large eigenvalue) reduction.
3. **Matrix Properties**: The matrices derive from differential operators, where the smallest eigenvalue (in matrix norm) reflects the operator’s minimal eigenvalue. Such matrices typically require stronger large-eigenvalue suppression, favoring smaller θ (0–0.5). The wide θ range in Table 3, tailored to operator specifics, explains the condition number disparity.
4. **Mitigation Strategy**: While condition numbers vary sharply, they change continuously with θ. Moreover, optimal θ values vary smoothly with operator parameters. Here, SymMaP effectively approximates these optimal parameters, resolving the instability. This scenario motivated our studies.

--- Rebuttal Comment 1.1: Comment: I appreciate the authors' further comments. However, some statements might not be very convincing. Regarding the impact of matrix size, the experiments only test matrix sizes from 1×10^4 to 6×10^4, which does not even change the order of magnitude of the size. Consequently, it might not be safe to directly state that the size has minimal impact, and it would be better to show it. Regarding AMG with multiple parameters, it would be better to include more experiments than only testing it on one problem. Thus, I still worry about the general performance. Overall, this paper is more on the empirical side, so I expect more comprehensive numerical analysis. However, based on the previous two points, I would keep my rating. --- Reply to Comment 1.1.1: Comment: Thank you for your follow-up questions.
Please allow me to provide additional clarification and experimental evidence to address your concerns. **Optimal Relaxation Parameters Across Matrix Sizes** 1. To thoroughly investigate the relationship between matrix size and optimal SOR relaxation parameters, we conducted extensive testing across a comprehensive range of matrix dimensions: 1. Tested sizes: 1×10³, 2×10³, ..., 9×10³, 1×10⁴, 2×10⁴, ..., 1×10⁵, 2×10⁵, ..., 5×10⁵ 2. Key finding: Matrix size shows no correlation with optimal SOR relaxation parameters 3. Supporting evidence: The complete distribution of optimal parameters is available at https://anonymous.4open.science/r/rebuttal3-90EF/Reviewer%20osPR%20exp1.pdf. The distributions are exactly the same. 2. Furthermore, we generated datasets with a mixture of different matrix sizes (uniformly 1×10³ to 5×10⁴). Detailed experimental data is available at https://anonymous.4open.science/r/rebuttal3-90EF/Reviewer%20osPR%20exp2&3.png. We then conducted Experiment 2, which confirmed our algorithm's robust performance across all matrix sizes in SOR parameter optimization. **Multi-Parameter AMG Experiments** - To address concerns about multi-parameter preconditioning capabilities, we performed additional testing: - Extended AMG+SOR dual-parameter experiments across 6 datasets: - 4 original benchmark datasets - 2 new challenging cases: Markov Chain: Boltzmann-distributed Brownian motion (nonsymmetric, parameterized by Chebyshev coefficients of potential energy); Numerical Integration: Lippmann-Schwinger equation (symmetric, parameterized by the potential's Chebyshev coefficients) - Detailed experimental data is available at https://anonymous.4open.science/r/rebuttal3-90EF/Reviewer%20osPR%20exp2&3.png. Experiment 3 results conclusively demonstrate SymMaP's ability to effectively handle multiple preconditioning parameters.
**Additional comparative experiments** - We have further supplemented the experiments comparing with other learning-based algorithms and the comparative experiments with expanded datasets. The specific experimental data can be found in https://anonymous.4open.science/r/rebuttal3-90EF/Reviewer%20Wyqu%20.png. - Across all experiments, the "[4]+SymMaP" combination consistently achieved the shortest computation times. When evaluating standalone algorithms (without combinations), SymMaP alone demonstrated the fastest performance in all test cases. - The performance advantage of SymMaP was particularly pronounced in CPU-only environments. - **These results conclusively demonstrate the superior performance characteristics of the SymMaP approach.** We sincerely appreciate your thoughtful questions. Due to space constraints in this response, we will include complete experimental details and more comprehensive analysis, in our final manuscript version. Should you have any further questions or require additional discussion, please don't hesitate to reach out. If we have adequately addressed your concerns, we would be grateful for your consideration in adjusting your evaluation score accordingly. Thank you for your time and valuable feedback.
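For readers less familiar with the metric used throughout this thread, the effect of a relaxation factor on the preconditioned condition number κ(M⁻¹A) can be reproduced with the textbook SSOR preconditioner on a 1D Poisson matrix (a dense NumPy sketch, unrelated to SymMaP's PETSc setup):

```python
import numpy as np

def ssor_preconditioner(A, omega):
    """Textbook SSOR preconditioner:
    M = (D + omega*L) D^{-1} (D + omega*U) / (omega * (2 - omega))."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    return (D + omega * L) @ np.linalg.inv(D) @ (D + omega * U) / (omega * (2 - omega))

n = 100  # 1D Poisson stencil [-1, 2, -1]
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
print("unpreconditioned:", round(np.linalg.cond(A)))
for omega in (0.5, 1.0, 1.5, 1.9):
    M = ssor_preconditioner(A, omega)
    print(omega, round(np.linalg.cond(np.linalg.solve(M, A))))
```

The scalar factor 1/(ω(2 − ω)) does not change κ, but the ω inside the triangular factors does: for moderate ω the preconditioned condition number sits far below the unpreconditioned value, which is the quantity the tables in this thread report.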
Summary: This paper presents a new approach to finding the quasi-optimal parameters of two preconditioners in order to speed up the resolution of linear systems associated with PDEs. The authors have developed an algorithm that generates the dataset consisting of the PDE parameters and the optimal preconditioner parameter (found by grid search). This data is then given to an RNN, which converts it into a symbolic expression. This network is trained by reinforcement learning, comparing the difference between the true optimal parameter and the result of the symbolic expression. The authors have used symbolic expressions because they speed up inference and provide interpretable results. Experimental results show that the parameter provided by their algorithm is on average more efficient for solving linear systems. ## Update after rebuttal I would like to thank the authors for answering my questions and for clarification. The results provided in the tables on comparison with the literature and on new datasets not originating from PDEs are very interesting. I have therefore decided to increase my score from 3 to 4. Claims And Evidence: Firstly, SymMaP improves preconditioning efficiency for solving linear systems linked to PDEs. But experiments on more generic datasets are lacking. Secondly, symbolic expressions are easy to interpret, since they are analytical and can be studied mathematically. However, there is no theoretical proof that SymMaP produces parameters close to optimal in general. Methods And Evaluation Criteria: The proposed method makes sense, even if the use of a transformer to predict the sequence of symbolic expressions could have been considered. And, as mentioned above, there is a lack of experiments with more generic datasets. Comparisons with other methods in the literature are also lacking (e.g.
https://arxiv.org/pdf/2405.15557, https://proceedings.mlr.press/v202/li23e/li23e.pdf, https://www.sciencedirect.com/science/article/pii/S0045782521007374). Theoretical Claims: The mathematical formulas seem correct to me, but as mentioned above, there is no guarantee of the difference between the true optimum and that obtained by grid search. Experimental Designs Or Analyses: The article compares several preconditioners (SOR, SSOR, AMG) on several benchmarks from PDEs. The metrics compared make sense (computation time, condition number for AMG). However, comparisons with other methods from the literature are lacking. Other datasets are also lacking to see whether SymMaP generalizes. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The article makes a real contribution to preconditioning methods for solving linear systems, as it provides preconditioners that perform well on average for a given problem, whereas most preconditioners are effective only for a single instance of the problem. The article also uses symbolic expressions to interpret the parameters of the PDEs. Essential References Not Discussed: As mentioned above, the article does not compare itself to other methods in the literature that attempt to learn preconditioners for solving linear systems. Other Strengths And Weaknesses: The prediction of symbolic expressions is very original and very clear in the article. These symbolic expressions seem to have a real usefulness for the interpretability of PDE parameters and could be used in other cases. Other Comments Or Suggestions: I have no further comments or suggestions. Questions For Authors: 1. For AMG, why use condition number rather than computation time? Because, for large matrices, it's expensive to calculate and does not correlate 100% with computation time. 2. What was the choice of fixed constants in Tables 1, 2 and 3? Why these precise values? 3. 
What is the "tolerance" parameter in Figure 2, Tables 1, 2 and 4? 4. How long did the MLP training last? Because if a user wants to use SymMaP on a new problem, they will need to train the model. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful and valuable comments. We respond to each comment and sincerely hope that our rebuttal could properly address your concerns. If so, we would deeply appreciate it if you could raise your score. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our submission. ## **Methods 1** > And, as mentioned above, there is a lack of experiments with more generic datasets. (e.g. [1] [2] [3]) - In our main experiments (Section 5.1, line 374), we evaluate **5** **classes of physical equations**. For context: The cited works study fewer equation classes: [1] (2 classes), [2] (3 classes), [3] (5 classes, two of which are special cases of second-order elliptic equations). - Other related works are even more limited: [4] (3 classes), [6] (2 classes), [7] and [8] (1 class each). - Among our five classes, **Poisson’s equation** is discussed in [1–4, 6, 7], **thermal problems** in [2, 4], and **second-order elliptic equations** in [3]. - These datasets represent distinct, scientifically significant problem classes with varied mathematical properties (see Sections 5 and D.1). Crucially, **SymMaP outperforms baselines across all dataset-preconditioner combinations** (Tables 1–3), demonstrating its generalization capability. We will add citations to these works in the final version. If the reviewer suggests additional equations to test, we are happy to include them. ## **Methods 2** > Comparisons with other methods in the literature are also lacking (e.g. [1][2][3]). - Please see the response to reviewer Wyqu's **Weaknesses 1 & 2** - **Additional Note**: You mentioned [3], a neural network-based PDE solver. Similar works include PINN [5], which employs neural networks as optimizers to solve PDEs via gradient descent. These methods do not involve traditional linear solvers or preconditioning optimization. 
## **Theoretical Claims** - As noted in Section 3.1 (line 100) and Figure 1 (line 67), we observe that **SOR, SSOR, and AMG preconditioner parameters exhibit continuous relationships with performance metrics**, typically with **one or two local minima**. Thus, grid search effectively approximates the optimal parameters. - Theoretical derivation of optimal preconditioner parameters is often intractable, justifying our grid-search approach. ## **Q 1** - We agree. As noted in Section 3.2 (C1, line 154), preconditioner objectives vary by scenario: - For stable systems, **minimizing runtime** is prioritized. - For ill-conditioned systems (e.g., certain elliptic PDEs), **reducing condition numbers** is critical to ensure solvability. - Our AMG experiments (Section B.2.2) focus on threshold parameters, which filter small values during graph aggregation. For some problems (e.g., second-order elliptic PDEs), setting this parameter to **0** minimizes runtime, so we prioritize stability here. ## **Q 2** - SOR: The relaxation factor in SOR lies in (0, 2). For well-conditioned matrices, larger values (e.g., 1.5–1.9) are typically stable. For matrices with significant off-diagonal elements (e.g., nonlinear PDEs from turbulence), smaller values (e.g., 0.1–0.5) are preferred. Note that SOR reduces to Gauss-Seidel when the factor is 1, which is also the default in libraries like PETSc. Thus, we test three fixed values: 0.2, 1, and 1.8. The same logic applies to SSOR. - AMG: Our preliminary tests show that the optimal threshold for our physical equations lies below 1.5. A threshold of 0 (or lower) disables coarsening, which is the default in PETSc and similar libraries. We therefore test three fixed values: 0, 0.2, and 0.8. ## **Q 3** - Tolerance is the **relative convergence criterion**: *∥b − Ax∥/r₀*, where *r₀* is the initial residual. We apologize for the ambiguity and will define this explicitly in the final version. 
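To make the grid-search and tolerance discussion above concrete, here is a minimal self-contained sketch (not the paper's code): it runs SOR on a toy 1-D Poisson system, counts iterations until the relative convergence criterion ∥b − Ax∥/r₀ < tol is met, and grid-searches the relaxation factor. The matrix, tolerance, and ω-grid are illustrative assumptions.

```python
import numpy as np

def sor_iterations(A, b, omega, tol=1e-6, max_iter=5000):
    """Run SOR on Ax = b; return the number of iterations needed to reach
    the relative convergence criterion ||b - Ax|| / r0 < tol."""
    x = np.zeros(len(b))
    r0 = np.linalg.norm(b - A @ x)
    for k in range(1, max_iter + 1):
        for i in range(len(b)):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(b - A @ x) / r0 < tol:
            return k
    return max_iter

# Toy system: 1-D Poisson matrix (tridiagonal [-1, 2, -1]).
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

# Grid search over the relaxation factor, as in the dataset-generation step.
counts = {round(w, 1): sor_iterations(A, b, w) for w in np.arange(0.1, 2.0, 0.1)}
best = min(counts, key=counts.get)
print(f"Gauss-Seidel (omega = 1.0): {counts[1.0]} iterations")
print(f"best omega on the grid: {best} ({counts[best]} iterations)")
```

On this toy system the iteration count indeed dips to a single minimum near ω ≈ 1.7–1.8, consistent with the one-or-two-local-minima observation quoted above.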
## **Q 4** - SymMaP uses **RNN-based symbolic learning.** Training time: **800s** (non-polynomial symbols) or **2600s** (with polynomials) (see Section "Computational Time," line 926). - In Section 5.2 (line 376), we compare SymMaP to an **MLP** (trained for **1.5 hours** to convergence). [1] Learning from Linear Algebra: A Graph Neural Network Approach to Preconditioner Design for Conjugate Gradient Solvers [2] Learning Preconditioners for Conjugate Gradient PDE Solvers [3] A Finite Element based Deep Learning solver for parametric PDEs [4] Neural Krylov iteration for accelerating linear system solving [5] Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations [6] Neural incomplete factorization: learning preconditioners for the conjugate gradient method [7] Learning to Optimize Multigrid PDE Solvers [8] Learning Neural PDE Solvers with Convergence Guarantees
Summary: This paper focuses on successive over-relaxation (SOR), an iterative method for solving $Ax=b$ that parametrizes the classical Gauss-Seidel (GS) method. The related parameter $\omega$ has to be tuned to ensure SOR converges faster than GS. While an analytical optimal expression exists, it depends on the spectral radius of $I - \text{diag}(A)^{-1}A$, which is expensive to determine in the general case. The matrix $A$ often comes from the discretization of a continuous differential equation. The authors propose to train a machine learning model to learn the relations between the coefficients of typical partial differential equations (PDEs) - namely Darcy flows, elliptic, biharmonic, Poisson - and the optimal value of $\omega$. Their approach is not to simply build a predictor of this value but rather to train a generator for the expression of $\omega_{opt}$ that depends on the PDE parameters $\alpha$ (e.g. length of the domain, coupling of dimensions, etc.) and some predefined tokens (e.g. common operators and simple functions). Meaning should therefore be extractable from generated expressions. The length of the expressions has been chosen such that they would not be expensive to compute and could be plugged in simply into a typical SOR linear solver. The authors found that their approach outputs satisfactory values for $\omega$ across instances of typical PDEs: while the generator does not systematically produce $\omega_{opt}$, it is tailored to each PDE instance and outperforms both $\omega=1$ (corresponding to GS, which is the default in PETSc, a state-of-the-art toolbox for solving PDEs) and the selection of $\omega$ as the best constant across instances (which could be selected by an expert as an oversimplification). Thus the authors claim that their model generalizes well. ## update after rebuttal I would like to thank the authors for addressing my concerns. I did not change my review score. 
## 1 From your argument to Reviewer Wyqu, I am curious how low the number of samples can get. If this number is low, there's a good avenue to test generalization by comparing two SymMaP generators trained on the same number of different PDEs. ## 2 The huge variance hardly suggests stability. I think a table is not the proper way to visualize the performance distribution; nonetheless, it improves the quality of the results. ## 3 See 1. ## 4 How stable does the matrix need to be? This measure of stability could be a measure of generalization. The table provides interesting results; the detailed analysis in the final version would be an interesting read. ## Exp 2 A simple sample may not be enough but the additional graph looks interesting. ## Ref Thank you for the clarification. Claims And Evidence: The generator of expressions can output satisfactory $\omega$s, as evidenced by Figure 2. The generator's interpretability is mostly supported by the expression generated for the elliptic PDE, which matches empirical heuristics (cf. Section 5.3). This evidence is relatively light but still insightful. While the cost of expressing $\omega$ may be light, the training cost seems a deterrent compared to the over-simplification of an expert. Take Table 1: for the biharmonic equations, for instance, the dataset was generated in 100 hours = 3000 systems solved @ PETSc speed. Generating the dataset becomes worthwhile after ~8k systems solved (or ~20k with $\omega=1.8$ as baseline). I believe the generalization claim to be relatively bold as the authors provide only averages: more details on the distribution of compute times would be adequate (median, quartiles, ...). This claim could be substantiated further for instance by detailing differences in $(\alpha,\omega)$ with the closest equations in the train datasets: I fail to assess if the generator is memorizing/overfitting. 
Moreover, it seems to me that the generator would not be fit to use for entirely different PDEs whose parameters it has not been trained on - this point might constitute actual generalization. Changing the underlying numerical scheme that produces $A$ may be another avenue to showcase generalization (discretization steps but also entirely different schemes). Methods And Evaluation Criteria: My expertise does not allow me to assess the ML method in itself but the paper reads sound on the matter by deriving a gradient through published methods. The evaluation of multiple $\omega$ is sound as well. Theoretical Claims: The paper is mostly experimental. Experimental Designs Or Analyses: Tables 1, 2 and 3 provide averages but could provide more insights on the actual distribution of compute times/condition numbers. Moreover, trajectories of iterations could be provided for samples of PDEs. Supplementary Material: I only read a few snippets of code. The material seems complete to test reproducibility, which I did not carry out. Relation To Broader Scientific Literature: Typical findings for $\omega$ study properties of $A$ that help derive optimal $\omega$. Such properties can include spectral considerations. The tools developed by the authors could replace or assist researchers and engineers alike when they tune the SOR method. Essential References Not Discussed: The paper fails to consider adaptive methods that tune $\omega$ as the residual $Ax^{(k)} - b$ shrinks over iterations. For instance https://dl.acm.org/doi/10.1007/s11075-019-00748-0. Other Strengths And Weaknesses: The study is original. Other Comments Or Suggestions: There are typos in Figure 3. I think the paper should be more humble about "parametrizing" the class of PDEs. 
For instance, SymMaP's fitness is not tested on any second-order elliptic equation of the form $$a_{11} u_{xx} + a_{12} u_{xy} + a_{22} u_{yy} + a_1 u_x + a_2 u_y + a_0 u = f$$ but specifically on cases where each $a_i$ is sampled in $(-1,1)$ and $a_{12} \in (-0.01,0.01)$. What happens for cases that do not map well to the distribution of coefficients? Questions For Authors: NA. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful and valuable comments. We respond to each comment and sincerely hope that our rebuttal could properly address your concerns. If so, we would deeply appreciate it if you could raise your score. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our submission. - ## **Claims And Evidence 1** - As noted, data generation can be time-consuming. However, symbolic learning in our framework requires significantly less data compared to other AI-based approaches. For details, please refer to our response to **Reviewer Wyqu’s Weaknesses 3**. ## **Claims And Evidence 2** To address this concern, we provide additional experimental results at https://anonymous.4open.science/r/rebuttal2-534E/rebuttal2.2.png. The data demonstrate the stability of our algorithm’s performance. We will include a comprehensive distributional analysis (e.g., median, quartiles) in the final version. ## **Claims And Evidence 3** - To clarify, we use the SOR-preconditioned second-order elliptic PDE experiment as an example: - As stated in Line 795 (Page 15), the input parameter α for SymMAP is sampled uniformly at random from its defined interval. - In this experiment, the average discrepancy between the predicted symbolic ω and the ground truth is **0.03**, indicating no memorization or overfitting. ## **Claims And Evidence 4** #### Generalization Across PDE Parameters If the matrix properties remain stable under parameter variation, the learned symbolic expressions can generalize beyond the training range. Otherwise, as noted, performance may degrade. To validate this, we conducted additional experiments: - **Settings:** Equation: Second-order elliptic, grid size 40,000. Test range: α ∈ (−2, 2), coupling term ∈ (−0.5, 0.5) (vs. training range: α ∈ (−1, 1), coupling term ∈ (−0.01, 0.01)). Tolerance: 1e-3. 
- | None | PETSc default 1 | Fixed constant 0.1 | Fixed constant 1.9 | Optimal constant | SymMaP | | ----- | --------------- | ------------------ | ------------------ | ----------------- | ------ | | 16.98 | 4.04 | 1.26 | 15.2 | 0.94 | 0.86 | - The results confirm strong generalization. We will include a detailed analysis in the final version. #### Generalization Across PDE Types and Discretization Schemes As correctly pointed out, SymMAP searches for mappings from input parameters to optimal preconditioner parameters. Changing the PDE type or discretization scheme alters the optimal parameters, limiting direct generalization. However, our interpretable expressions reveal that optimal parameters depend on specific matrix features (e.g., sparsity, diagonal dominance). In future work, we aim to generalize SymMAP by directly ingesting matrix properties, enabling adaptation to diverse PDEs and discretizations. ## **Experimental Designs Or Analyses 2** > Moreover, trajectories of iterations could be provided for samples of PDEs. We include iteration trajectories for the SOR-preconditioned biharmonic equation https://anonymous.4open.science/r/rebuttal2-534E/rebuttal2.1.png, demonstrating SymMAP’s superior convergence. A full analysis will be added to the final version. ## **Essential References Not Discussed** > The paper fails to consider adaptive methods that tune ω as the residual Ax(k)−b shrinks over iterations. For instance [1]. We appreciate this insightful suggestion. The work in [1] presents an elegant approach for adaptively adjusting the SOR relaxation parameter ω based on residual reduction. We will explore further in future work to potentially integrate with our framework, and we will ensure proper citation. However, we note a fundamental distinction between the application scenarios: 1. **SOR as a Standalone Solver (as in [1]):** • This is a fixed-point iteration method where optimal ω can be derived theoretically (albeit computationally expensive). 2. 
**SOR as a Preconditioner (our focus):** • Here, SOR assists Krylov subspace methods (e.g., GMRES, MINRES, CG). The optimal ω lacks theoretical guidance and depends on the solver choice (e.g., GMRES-preconditioned ω ≠ CG-preconditioned ω). • Our grid-search-based approach addresses this gap, as no existing adaptive methods target preconditioner tuning in this context. Thus, while [1] is highly relevant to SOR solvers, its direct applicability to preconditioned Krylov methods remains limited. ## **Other Comments Or Suggestions** > There are typos in Figure 3. - We apologize for this oversight and will correct the typographical errors in the final manuscript. [1] Adaptive SOR methods based on the Wolfe conditions, 2019, https://dl.acm.org/doi/10.1007/s11075-019-00748-0
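As background for the "optimal ω can be derived theoretically (albeit computationally expensive)" point: for consistently ordered matrices, Young's classical theory gives $\omega_{opt} = 2/(1+\sqrt{1-\rho(J)^2})$, where $J = I - \text{diag}(A)^{-1}A$ is the Jacobi iteration matrix mentioned in the review summary, and the expensive step is computing $\rho(J)$. A minimal sketch on an assumed toy 1-D Poisson matrix (where $\rho(J)$ is also known in closed form, so the result can be cross-checked):

```python
import numpy as np

# Toy consistently ordered matrix: 1-D Poisson (tridiagonal [-1, 2, -1]),
# for which Young's classical SOR theory applies.
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Jacobi iteration matrix J = I - diag(A)^{-1} A; computing its spectral
# radius is the expensive step for a general large matrix.
J = np.eye(n) - np.diag(1.0 / np.diag(A)) @ A
rho = max(abs(np.linalg.eigvals(J)))

omega_opt = 2.0 / (1.0 + np.sqrt(1.0 - rho**2))

# For this particular matrix the spectral radius is known analytically.
assert np.isclose(rho, np.cos(np.pi / (n + 1)))
print(f"rho(J) = {rho:.5f}, omega_opt = {omega_opt:.4f}")
```

For a general matrix arising from an arbitrary PDE discretization, no such closed form for $\rho(J)$ exists and the eigenvalue computation dominates, which is the gap the grid-search-plus-symbolic-regression pipeline targets.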
Online Learning in Risk Sensitive constrained MDP
Accept (poster)
Summary: The paper develops the first sublinear regret bound for the risk sensitive CMDP problems. In contrast to the classical CMDP, an additional risk measure is put on the left hand side of the constraint value, and it is required that the final value is greater than a threshold. The paper develops the clever idea of introducing an augmented variable to denote the budget constraint and the optimizing over the minimax formulation of the CMDP, with an additional augmented state. Finally, the sublinear regret has been developed for the objective value, as well as the constraint. Claims And Evidence: The claims look good to me. Methods And Evaluation Criteria: The method makes sense but the numerical experiment is lacking. Theoretical Claims: The claims and proofs look correct to me. Experimental Designs Or Analyses: There is no numerical experiment of the paper. Supplementary Material: The supplementary material has been gone through. Relation To Broader Scientific Literature: The paper develops the first sublinear regret for the risk-sensitive CMDP. Essential References Not Discussed: The algorithm developed in the paper is based on the Lagrangian formulation, where the primal-dual algorithmic framework is widely developed to solve it. However, primal-based algorithms have also been developed to solve the CMDP problems. For example, the primal-based algorithm has been developed in [1], [2], [3], [4], and [5]. In particular, the paper [6] develops resolving primal LP methods to solve the CMDP and achieves the first instance-dependent $\tilde{O}(1/\epsilon)$ sample complexity. I think these papers are worth mentioning in the literature. References: [1]. Yongshuai Liu, Jiaxin Ding, and Xin Liu. Ipo: Interior-point policy optimization under constraints. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 4940–4947, 2020. [2]. Yinlam Chow, Mohammad Ghavamzadeh, Lucas Janson, and Marco Pavone. 
Risk-constrained reinforcement learning with percentile risk criteria. Journal of Machine Learning Research, 18(167):1–51, 2018. [3]. Yinlam Chow, Ofir Nachum, Aleksandra Faust, Edgar Duenez-Guzman, and Mohammad Ghavamzadeh. Lyapunov-based safe policy optimization for continuous control. arXiv preprint arXiv:1901.10031, 2019. [4]. Gal Dalal, Krishnamurthy Dvijotham, Matej Vecerik, Todd Hester, Cosmin Paduraru, and Yuval Tassa. Safe exploration in continuous action spaces. arXiv preprint arXiv:1801.08757, 2018. [5]. Tengyu Xu, Yingbin Liang, and Guanghui Lan. CRPO: A new approach for safe reinforcement learning with convergence guarantee. In International Conference on Machine Learning, pages 11480–11491. PMLR, 2021. [6]. Jiang, Jiashuo, and Yinyu Ye. "Achieving $\tilde{O}(1/\epsilon)$ Sample Complexity for Constrained Markov Decision Process." NeurIPS, 2024. Other Strengths And Weaknesses: Strengths: The paper develops the clever idea of considering the augmented state space by letting a state variable denote the budget. The paper achieves the first sublinear regret for this type of risk-sensitive CMDP. Weaknesses: 1. The current method applies to the single-constraint case. Though the authors claim that there is a way to extend to multiple constraints, it is not immediately clear to me and I suspect the current approach would make the computation time exponential in the number of constraints. Please refer to the questions below for more details on this weakness. 2. It is not clear how the developed algorithm works in practice due to the lack of numerical experiments. 3. The computational complexity of their algorithm has not been discussed. Other Comments Or Suggestions: Please refer to the questions below. Questions For Authors: 1. It is discussed in the extension that the current approach could extend to the multiple constraint case. However, it seems that the current approach optimizes over the augmented variable $\tau$ by discretizing the range. 
If there are multiple constraints, will it be the case that a multi-dimensional space would need to be discretized? This will make the computation time exponentially large. 2. Is there any intuition on the lower bound of the regrets for the risk-sensitive CMDP? 3. Could you please provide any comment on the practical performance of the algorithm developed in the paper? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for providing thoughtful comments. Please see our responses to your questions below. >*For multiple constraints* * As correctly pointed out by the reviewer, the complexity of our approach is affected by the number of constraints. Specifically, extending the method proposed in the paper to multiple constraints requires augmenting the state with multiple budget variables, which leads to an exponential increase in the state space. For $M$ constraints, the size of the augmented state space scales as $ (\frac{C}{\epsilon})^M $, where $C$ is the upper bound of the augmented budget $\tau$. We thank the reviewer for highlighting this issue, and we will update the statement in the paper accordingly. >*Lack of numerical experiments* * Our main contribution is theoretical in nature. To the best of our knowledge, this is the first work that establishes sublinear regret and constraint violation bounds in the setting where the goal is to maximize the expected cumulative reward subject to an entropic risk constraint on the utility. * To address this, we leverage the augmented MDP representation of OCE-based risk measures, which includes the entropic risk measure as a special case. This idea originates from [Bäuerle and Ott, 2011], where the authors studied the memory requirements for solving CVaR in the absence of a Bellman equation, and was further developed in [Wang et al., 2024] to reduce OCE problems to standard reinforcement learning. * Our work is the first to tackle the CMDP setting with an entropic risk constraint, using the augmented MDP framework to address the nonlinearity in optimizing the Lagrangian, and defining a composite value function. * Note that even though we use an augmented budget, we do not need to augment the full history; rather, we only need to track the remaining budget from the initial value. Also, since we assume the utilities are deterministic, the transition of the augmented budget is known. 
Thus, we expect that our approach can be implemented in practice. We will add numerical results in the final version of the paper. >*Computational Complexity* * The computational complexity is O(K), i.e., linear in $K$, and thus our approach is computationally efficient. In particular, if we discretize $\tau$ with spacing $1/K$, then this will add an additional $H/K$ factor to the regret bound (as the OCE representation for the entropic risk measure is 1-Lipschitz) per episode, resulting in $O(H)$ additional regret overall, which is independent of $K$. Thus, the maximization over $\tau$ can be done in $O(K)$ time and is hence efficient. We have discussed the computational complexity aspect in Section 4 (please see the paragraph just before Section 5). Note that the augmented budget is also considered in the unconstrained entropic risk or CVaR maximization problem [Wang et al.'2023,2024]. They also discretize the augmented budget, and their computational complexity is of the same order as ours. >*Lower Bound* * Regarding regret lower bounds for CMDPs, there are currently no known results for cumulative reward and utility, making this a largely unexplored area. The presence of an entropic constraint further complicates the analysis due to the absence of strong duality. >*Regarding the references* * Thanks for pointing out the references. These are really interesting, and we will include them in the final version. In the following, we recap our contributions relative to the other works. Note that to the best of our knowledge, this is the first work that achieves a sublinear regret bound ($O(\sqrt{K})$) and a sublinear violation bound ($O(K^{3/4})$) for the risk-constrained RL problem. Chow et al.'2018 did not consider the regret and violation bounds. In order to achieve this, we contribute significantly. We have to use a regularized primal-dual approach, unlike the risk-neutral CMDP setting, as strong duality does not hold. 
Further, we cannot apply dynamic programming-based approach to the composite state-action value function because of the non-linearity of the value function with respect to the state-action occupancy measure, and we have to resort to the OCE representation. In the OCE representation, we have to augment the state with a budget, and then we optimize over the budget. * Some studies also use an LP-based approach to bound the regret and the violation in the risk-neutral setting. They use the state-action-occupancy measure rather than a policy to optimize. However, the state-action occupancy-based measure (and thus, the LP-based approach) will not work for our problem as the value function is not linear in terms of the state-action occupancy measure.
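The OCE representation and the $\tau$-discretization argument above can be checked numerically. The sketch below (with an assumed sample distribution and risk parameter, not the paper's setting) verifies that the entropic risk measure $-\frac{1}{\alpha}\log \mathbb{E}[e^{-\alpha X}]$ equals $\sup_\tau \{\tau + \mathbb{E}[u(X-\tau)]\}$ with utility $u(t) = (1-e^{-\alpha t})/\alpha$, and that maximizing over a $\tau$-grid of spacing $1/K$ loses only $O(1/K)$:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 2.0                        # risk-aversion parameter (illustrative)
X = rng.uniform(0.0, 1.0, 10_000)  # stand-in samples of the cumulative utility

# Entropic risk measure in closed form (on the empirical distribution).
erm = -np.log(np.mean(np.exp(-alpha * X))) / alpha

# OCE representation, maximized over a discretized budget grid of spacing 1/K,
# mirroring the discretization of the augmented budget described above.
K = 1000
taus = np.arange(0.0, 1.0 + 1e-12, 1.0 / K)
oce = max(tau + np.mean(1.0 - np.exp(-alpha * (X - tau))) / alpha for tau in taus)

print(f"closed-form ERM: {erm:.5f}, discretized OCE: {oce:.5f}")
```

The grid maximum sits just below the closed-form value, and the gap shrinks as $K$ grows, which is the Lipschitz argument used in the complexity discussion.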
Summary: The paper studies online learning in episodic finite-horizon constrained Markov Decision Processes with entropic risk-sensitive constraints. Traditional primal-dual methods fail to directly address risk-sensitive CMDPs due to the non-linear nature of entropic risk constraints and the lack of strong duality. The authors propose augmenting the CMDP by incorporating a budget variable into the state space, allowing the use of value iteration. They then introduce a primal-dual algorithm with regularized dual updates and prove the first known sublinear regret and violation bounds for risk-sensitive CMDPs. Claims And Evidence: - The major claims of achieving sublinear regret and violation bounds for the risk-sensitive CMDP setup are well-supported by theoretical arguments and detailed proofs. - The paper claims that the augmented CMDP allows for a tractable solution to the risk-sensitive constraint, but the paper does not provide a computational complexity analysis. Methods And Evaluation Criteria: The introduction of an augmented CMDP framework to overcome the challenge of nonlinear entropic risk measures is inspired by (Bäuerle & Ott, 2011; Wang et al., 2024), which is sensible. While the evaluation criteria, specifically regret and constraint violation, seem quite standard for the considered setting, I do have some questions regarding the learning metric. Is the metric defined over the admissible/feasible policy set? If not, then some policy may achieve negative violation. Should we take the positive part of the violation then? Another concern regarding evaluation is that the paper lacks empirical evaluation, including simple numerical experiments. Theoretical Claims: I checked the main theoretical claims, specifically the correctness of Lemmas 3.1 and 4.1, which establish the value-function equivalences and the existence of Markovian optimal policies. No significant issues were identified; the derivations appear mathematically sound and clearly presented. 
Experimental Designs Or Analyses: The paper is purely theoretical, and no empirical experiments are presented. While their absence does not detract from the theoretical contributions, numerical validation could strengthen the paper. Supplementary Material: I went through the supplementary material. These sections provide well-structured and rigorous proofs. Relation To Broader Scientific Literature: The paper situates its contribution within the existing literature on CMDPs and risk-sensitive RL. It identifies gaps related to nonlinear risk constraints, appropriately builds upon recent advances (Wang et al., 2023; Ding & Lavaei, 2023), and extends primal-dual methods, adapting them to risk-sensitive settings. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: - First rigorous theoretical analysis of CMDPs with entropic risk-sensitive constraints. - Clear theoretical exposition and solid analytical framework. Weaknesses: - Lack of numerical examples or empirical validation. - The dual regularization strategy is somewhat heuristic; further justification or exploration of alternatives would strengthen the approach. Other Comments Or Suggestions: - A more detailed discussion on why entropic risk measures, rather than CVaR or other risk measures, are particularly valuable would help clarify the practical implications. In contrast, CVaR- or VaR-constrained formulations are well known and well motivated. - There are also some typos throughout the paper. Need proofreading. Questions For Authors: - Can the authors discuss the implications of the regularization parameter in more detail, including its choice and sensitivity? - Regarding the regret bound in Section 5.1, the lower bound result for ERM-MDP is improved and proven tight in Liang and Luo (2024), in contrast to Fei et al. (2021), which is not discussed. While it might be challenging, is it possible to derive or adapt lower bounds to this setting? 
How tight are your proposed upper bounds likely to be compared to these potential lower bounds? - The authors should also carefully discuss the worsening of the bounds compared with the risk-neutral CMDP in terms of $S$, $A$, $K$, and $H$, rather than $K$ only. - Following the idea of (Wang et al., 2023, 2024), it seems to be possible to directly extend the whole framework from the ERM-constrained setting to the OCE-constrained setting. Fei, Y., Yang, Z., Chen, Y., and Wang, Z. Exponential Bellman equation and improved regret bounds for risk-sensitive reinforcement learning. In Proceedings of the 35th International Conference on Neural Information Processing Systems, NIPS '21. Wang, K., Kallus, N., and Sun, W. Near-minimax-optimal risk-sensitive reinforcement learning with CVaR. In International Conference on Machine Learning, pp. 35864-35907. PMLR, 2023. Wang, K., Liang, D., Kallus, N., and Sun, W. Risk-sensitive RL with optimized certainty equivalents via reduction to standard RL. arXiv preprint arXiv:2403.06323, 2024. Liang, H., & Luo, Z. Q. (2024). Bridging distributional and risk-sensitive reinforcement learning with provable regret bounds. Journal of Machine Learning Research, 25(221), 1-56. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for providing thoughtful comments. Please see our responses below. >*Lack of numerical results* * Our main contribution is theoretical. To the best of our knowledge, this is the first work that establishes sublinear regret and constraint violation bounds in the setting where the goal is to maximize the expected cumulative reward subject to an entropic risk constraint on the utility. * To address this, we leverage the augmented MDP representation of OCE-based risk measures, which includes the entropic risk measure as a special case. This idea originates from [Bäuerle and Ott, 2011], where the authors studied the memory requirements for solving CVaR in the absence of a Bellman equation, and was further developed in [Wang et al., 2024] to reduce OCE problems to standard reinforcement learning. The OCE representation is necessary since the standard dynamic-programming-based approach does not apply to the composite state-action value function. We will add numerical results in the appendix of the final paper. >*Dual regularization..* * Minimizing the regularized Lagrangian $V_r^{\pi} + \lambda (V_g^{\pi} - b) + \beta \lambda^2$ with respect to $\lambda \ge 0$ yields $V_r^{\pi} - \frac{1}{4\beta}(V_g^{\pi} - b)^2$ if $V_g^{\pi} - b < 0$, and $V_r^{\pi}$ otherwise. Thus, adding the regularization is similar to minimizing the $\ell_2$ loss of the constraint violation, which facilitates bounding the violation term. In particular, we show that this regularization leads to an $O(K^{3/4})$ bound on the cumulative violation. >*Choice of ERM rather than CVaR..* The entropic risk measure prioritizes policies with desirable robustness and performance trade-offs. Specifically: * **Robustness:** The entropic risk measure is connected to robustness. It has been shown that maximizing the entropic risk measure with a risk-sensitivity parameter $\alpha$ is equivalent to optimizing the worst-case expected return under a distributional ambiguity. 
More precisely, this corresponds to maximizing the minimum performance over a set of distributions within a KL ball of radius $\alpha$ around the nominal distribution. * **Dynamic Programming:** Unlike CVaR or VaR, the entropic risk measure is smooth and satisfies a multiplicative form of the Bellman equation. This makes it particularly amenable to gradient-based optimization algorithms. * **Sensitivity to the Risk Parameter:** As the risk factor $\alpha$ approaches infinity, the entropic risk measure increasingly emphasizes adverse outcomes. However, unlike CVaR, which explicitly focuses on tail risk, it still retains sensitivity to the entire distribution of outcomes. >*Implications of the regularization term* * The added regularization ensures the boundedness of the violation, which is essential for our analysis and the derivation of both the constraint violation and the regret bounds. As the regularization parameter increases, the algorithm increasingly prioritizes smaller values of $\lambda$, eventually rendering the resulting algorithm ineffective. Therefore, a small choice of $\beta$ is necessary for maintaining good performance. Specifically, as stated in Lemma 5.2, there exists a trade-off between the step-size and the regularization parameter, i.e., $\beta \eta$ must be smaller than 0.5. >*Regarding lower bound* * We thank the reviewer for mentioning the work of [Liang and Luo, 2024]. Extending the results of [Liang and Luo, 2024] would be quite challenging, but represents an interesting future direction. Specifically, the distributional RL framework for entropic risk follows a similar multiplicative Poisson equation as the one we study and thus encounters similar challenges to those discussed in our paper. * Regarding regret lower bounds for CMDP, there are currently no known results, making this a largely unexplored area. The presence of an entropic constraint further complicates the analysis due to the absence of strong duality.
Nonetheless, one might be able to extend the general ideas from [Liang and Luo, 2024] by adopting an augmented MDP approach instead. We agree this is a promising direction and have added a sentence discussing it in the paper. Our regret bound is the same as that of the risk-neutral CMDP. >*Extending other OCE representations* * The main focus of this paper has been to extend the CMDP framework to the entropic risk minimization setting. To tackle this problem, we leverage the augmented MDP representation introduced by [Wang et al., 2023, 2024]. This approach, however, comes at the cost of discretizing the parameter $\tau$, for which we currently do not have a complete solution. We also acknowledge that the proposed framework can be generalized to a broader class of risk measures that admit an OCE representation. This extension can build upon the ideas of [Wang et al., 2024] and our current analysis. However, such a generalization is non-trivial and involves additional technical challenges that constitute a future research direction. --- Rebuttal Comment 1.1: Comment: Thank the authors for their detailed response and clarifications, which address most of my concerns. I maintain my positive evaluation for this paper. --- Reply to Comment 1.1.1: Comment: We are glad that our responses have clarified most of your concerns, and would like to thank you for your support. Could you please consider raising the score?
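As a worked check of the dual-regularization step discussed in this rebuttal (our own algebra reproducing the claim, not taken from the paper), minimizing the regularized Lagrangian over the dual variable gives:

```latex
\min_{\lambda \ge 0}\; V_r^{\pi} + \lambda\,(V_g^{\pi} - b) + \beta \lambda^{2},
\qquad
\lambda^{*} = \max\!\left(0,\; \frac{b - V_g^{\pi}}{2\beta}\right).
```

When $V_g^{\pi} - b < 0$, substituting $\lambda^{*} = \frac{b - V_g^{\pi}}{2\beta}$ yields $\lambda^{*}(V_g^{\pi} - b) + \beta (\lambda^{*})^{2} = -\frac{(V_g^{\pi} - b)^{2}}{2\beta} + \frac{(V_g^{\pi} - b)^{2}}{4\beta} = -\frac{1}{4\beta}(V_g^{\pi} - b)^{2}$, recovering $V_r^{\pi} - \frac{1}{4\beta}(V_g^{\pi} - b)^{2}$; otherwise $\lambda^{*} = 0$ and the value is $V_r^{\pi}$, matching the expression stated in the rebuttal.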
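As background for the OCE representation used throughout this discussion, here is a minimal numerical sketch (our own illustration, not code from the paper; the toy distribution is a hypothetical example) showing that a grid search over the budget parameter $\tau$ recovers the closed-form entropic risk:

```python
import math

# Sketch (our own illustration, not the paper's code): the entropic risk
# measure ERM_alpha(X) = -(1/alpha) * log E[exp(-alpha X)] admits the
# optimized-certainty-equivalent (OCE) form
#   ERM_alpha(X) = sup_tau { tau + E[u(X - tau)] },  u(t) = (1 - exp(-alpha t)) / alpha,
# so a grid search over the budget parameter tau (as in the discretization
# argument in the rebuttal) recovers the closed-form value.

def erm_closed_form(xs, ps, alpha):
    """Closed-form entropic risk of a discrete distribution (xs, ps)."""
    return -math.log(sum(p * math.exp(-alpha * x) for x, p in zip(xs, ps))) / alpha

def erm_via_oce(xs, ps, alpha, grid):
    """Approximate the OCE supremum by searching tau over a finite grid."""
    def u(t):
        return (1.0 - math.exp(-alpha * t)) / alpha
    return max(tau + sum(p * u(x - tau) for x, p in zip(xs, ps)) for tau in grid)

xs, ps, alpha = [0.0, 1.0], [0.5, 0.5], 1.0
grid = [i / 1000 for i in range(1001)]  # tau in [0, 1], spacing 1e-3
# Both values agree up to the grid resolution (about 0.3799 for this toy example).
```

For the entropic case the maximizing $\tau$ equals the risk value itself, so a modest grid over the budget suffices, which is the intuition behind the $1/K$-spacing argument in the rebuttal.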
Summary: This work focuses on Risk-Sensitive Constrained MDPs and proposes a novel algorithm that ensures the entropic risk for an additional utility function remains above a given threshold. Under this setting, the author introduces a new algorithm based on the primal-dual method for an augmented MDP. Additionally, the author provides the first sub-linear regret guarantee for the reward in this setting, along with a non-optimal violation magnitude for the constraints. Claims And Evidence: The author states the results clearly in the theorems and includes a proof sketch to outline the key steps of the theoretical analysis. Methods And Evaluation Criteria: The main contribution of this work is the theoretical analysis of regret, and the paper does not include experiments. Theoretical Claims: The novel algorithm combines the augmented method with the primal-dual method, where the augmented method is widely used in risk-sensitive reinforcement learning, and the primal-dual approach is commonly applied in constrained MDPs. Therefore, the algorithm is well-motivated. The author provides a clear proof sketch, and there are no concerns about correctness based on the proof sketch. Experimental Designs Or Analyses: The main contribution of this work is the theoretical analysis of regret, and the paper does not include experiments. Supplementary Material: No, due to time limitations, I only reviewed the main paper and did not check the supplementary material. Relation To Broader Scientific Literature: This work mainly focuses on risk-sensitive constrained reinforcement learning; however, the proposed algorithm is highly computationally inefficient, making it primarily relevant for the theoretical analysis of reinforcement learning rather than practical applications. Essential References Not Discussed: This paper provides a comprehensive discussion of related work in risk-sensitive reinforcement learning and constrained reinforcement learning. Other Strengths And Weaknesses: 1.
The proposed algorithm is highly computationally inefficient due to the augmented feature of the budget variable. Specifically, in tabular MDPs, the number of state-action pairs is finite, allowing for efficient value function updates. However, the augmented budget variable can take any real value, making the value function update (Lines 13-14) computationally expensive, as it must be performed over a continuous range of budget values. Additionally, the optimization of $\tau$ in Line 16 may be inefficient, especially if the value function lacks useful properties such as convexity. A discussion on potential computational improvements or approximations would be beneficial. 2. The algorithm appears to be a direct combination of the existing primal-dual method used in constrained RL and the augmented method used in risk-sensitive RL. As a result, the novelty of the approach is limited. 3. It is not clear why violation is an appropriate measure for constrained RL. Specifically, in a strictly constrained setting, the utility should be above the threshold in each episode, rather than being averaged over multiple rounds. Even in a soft-constrained setting (such as in classification problems), a more natural approach would be to use a truncated violation measure like $\max(0, B - V_g)$ in each round, preventing a scenario where a round with high utility compensates for rounds with low utility. Furthermore, even if such compensation is allowed, given that the author considers a risk-sensitive utility function, it would be more reasonable to introduce an entropy-based loss structure for the compensation process rather than using a linear summation, which is more suitable for risk-neutral cases. Other Comments Or Suggestions: 1. It is not clear why violation is an appropriate measure for constrained RL. Specifically, in a strictly constrained setting, the utility should be above the threshold in each episode, rather than being averaged over multiple rounds.
Even in a soft-constrained setting (such as in classification problems), a more natural approach would be to use a truncated violation measure like $\max(0, B - V_g)$ in each round, preventing a scenario where a round with high utility compensates for rounds with low utility. Furthermore, even if such compensation is allowed, given that the author considers a risk-sensitive utility function, it would be more reasonable to introduce an entropy-based loss structure for the compensation process rather than using a linear summation, which is more suitable for risk-neutral cases. 2. In this work, the author considers a deterministic reward. In the traditional RL framework, it is natural to transform the reward into its expectation, and unbiased noise does not significantly affect the learning process, as most challenges arise from learning the transition dynamics. However, it is unclear whether the assumption of a deterministic reward is reasonable in a risk-sensitive environment. Will this assumption have further implications for the entropy-type value function? Questions For Authors: 1. In the violation metric, why consider a linear summation for the compensation process rather than an entropy-based loss structure, which is more aligned with risk-sensitive settings? 2. Is the assumption of a deterministic reward reasonable in a risk-sensitive environment (see comment 2 above)? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for providing thoughtful comments. Please see our responses below. >*Regarding the computational efficiency...* * *Even though the augmented budget can take values in real space, the computational complexity is still $O(K)$, i.e., linear in $K$. Thus, it is **not true** that our approach is computationally inefficient.* In particular, if we discretize $\tau$ with spacing $1/K$, then this adds an $H/K$ factor to the regret bound (as the optimized certainty equivalent (OCE) representation for the entropic risk measure is 1-Lipschitz) in every episode, resulting in $O(H)$ additional regret overall, which is independent of $K$. Thus, the maximization over $\tau$ can be done in $O(K)$ time, which is linear in $K$ and hence efficient. We have discussed the computational complexity aspect in Section 4 (please see the paragraph just above Section 5). Note that the augmented budget is also considered in the unconstrained entropic risk or CVaR maximization problem [Wang et al., 2023], and the computational complexity is of the same order as ours. * While the augmented budget can take real values, the constrained entropic risk-measure problem is inherently challenging. *In particular, one cannot apply the dynamic programming-based approach to the Lagrangian or the composite state-action value function, unlike the risk-neutral CMDP approach.* Note that unconstrained entropic risk-sensitive RL admits an optimal Bellman equation, and one can directly apply the dynamic programming-based approach there without augmentation. However, we cannot extend those approaches here. Hence, we need to resort to the OCE representation of the entropic risk measure. As a result, we augment the state space with the budget and then solve for the optimal budget. >*..the novelty of the approach is limited.* In the following, we state the novelty of our proposed approach.
* Our study shows that in the constrained entropic risk-sensitive RL problem one cannot apply the dynamic programming-based approach to the composite state-action value function, unlike the risk-neutral CMDP approach. Hence, we need to resort to the OCE representation of the entropic risk measure. The key here is that we can write the Bellman equation for the composite state-action value function in the OCE representation. Hence, one obtains an efficient computational approach. * The additional challenge comes from the fact that the entropic risk measure is not linear in the state-action occupancy measure even with the augmented state space. Hence, the traditional primal-dual-based approach is not applicable for bounding the violation, unlike the risk-neutral CMDP setup. We resort to the regularized primal-dual-based approach. * Further, for the unconstrained augmented MDP problem, one uses the fact that the greedy policy is optimal with respect to the augmented problem to achieve the bound. However, in the constrained setting, the greedy policy with respect to the composite state-action value function (in the augmented state space) might not be feasible and, hence, might not be optimal. Hence, we have to use novel proof techniques. * Overall, to the best of our knowledge, this is the first result that shows that $O(\sqrt{K})$ (sublinear) regret and an $O(K^{3/4})$ (sublinear) violation bound are achievable in the constrained risk-sensitive setting. To achieve this, we have to identify and apply these tools in a novel manner, which we believe will inform new approaches in the future. >*Other violation metric..* * The violation metric we consider is common in the risk-neutral CMDP literature. Note that even for this violation metric, the traditional primal-dual-based approach that can bound the violation in the risk-neutral setting is not applicable here since strong duality may not hold.
Hence, bounding this violation metric is challenging in our setting. * We agree with the reviewer that a truncated violation like $\max(0,B-V_g)$ might be a better alternative for online learning. However, how to achieve optimal regret along with the truncated violation in a computationally efficient manner using a primal-dual approach still remains open *even in the risk-neutral CMDP*. The recent work [A1] achieves $O(\sqrt{K})$ regret and $O(\sqrt{K})$ *truncated* violation in the risk-neutral CMDP using a double-loop technique. However, the computational complexity of the proposed approach is exponential in terms of $H$ and $K$. We can use a similar technique to bound the truncated violation in our setting. However, the computational complexity will still be exponential. We will mention the above in the final version. For the above reason, we did not explicitly consider the truncated violation metric. How to develop a computationally efficient approach to bound both the regret and the truncated violation has been left for future work. [A1]. Ghosh et al. "Towards achieving sub-linear regret and hard constraint violation in model-free RL." AISTATS, 2024. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I still have a concern regarding the violation metric. According to the authors, it seems acceptable to allow non-truncated violations in each round. However, as I mentioned in the "Weaknesses and Questions" section, a main concern remains: why is the violation metric defined as the summation of violations across stages? While linear summation is standard in risk-neutral environments for both rewards and violations, it may not be appropriate in risk-sensitive settings. For example, a risk-sensitive user would clearly prefer a trajectory with violations of (0, 0) over (1, -1), even though their sums are equal. This suggests the current metric may fail to capture meaningful distinctions in risk-sensitive scenarios.
Furthermore, even if some form of compensation across rounds is allowed, using a linear summation seems misaligned with the stated risk-sensitive objective. It would be more appropriate to incorporate an entropy-based or utility-weighted loss structure that better reflects risk sensitivity. Therefore, I will keep my score unchanged. --- Reply to Comment 1.1.1: Comment: Thanks for your comments. In the following, we address your comment. >*..a main concern remains: why is the violation metric defined as the summation of violations across stages?..* We believe the reviewer may be conflating two related but distinct notions: *constraint violation* and the *risk-sensitive nature of the constraint*. We'd like to emphasize this distinction. The *risk sensitivity* in our setting is captured through the entropic risk associated with the utility function. In contrast, the *constraint* imposes a hard threshold that defines the admissible set of policies, within which we aim to find the optimal one. Importantly, under this formulation, the agent inherently prefers not to violate the constraint at all. Even when considering cumulative utility, the agent would favor $(0, 0)$ over $(1, -1)$, as the latter involves a violation, whereas the former does not. Moreover, the agent has no preference between $(0, 10)$ and $(5, 5)$, as both satisfy the constraint and do not result in any violation. While ideally we would like to measure the total number of time steps at which constraint violations occur (i.e., a binary indicator per round), this formulation is difficult to analyze and optimize over. The next-best alternative is the *truncated violation*, which does reflect constraint satisfaction more faithfully but introduces significant nonlinearity and is also analytically challenging. 
As we mentioned in our earlier rebuttal, how to minimize the truncated violation metric $\max(0, B - V_g)$ using a computationally efficient approach is still an open question, even for risk-neutral MDPs. Hence, the consideration of such a metric is beyond the scope of this paper. **Instead, we show that although dynamic programming-based approaches and Markovian policies are optimal in the unconstrained case, they are no longer applicable in the constrained case, even when considering the composite state-action value function.** *To address this, we leverage the Optimized Certainty Equivalent (OCE) representation and demonstrate how state augmentation can be used to minimize both regret and violation.* Furthermore, standard primal-dual methods cannot be directly applied here since strong duality may fail to hold; the value function is no longer linear in the state-action occupancy measure. To overcome this, we introduce a regularization term and derive bounds for both regret and violation. We believe these insights can serve as a foundation for future work that explores alternative violation metrics in risk-sensitive settings. As a practical compromise in this work, we adopt the *linear (untruncated) violation metric*, which, though weaker, has been widely used and empirically shown in many risk-neutral settings to correlate well with truncated violations. Note that here, the violation measures how much the entropic risk measure associated with the policy deviates from $B$; it does not consider realized values, as the reviewer might be suggesting. Moreover, constraint satisfaction can often be improved by slightly tightening the constraint threshold, e.g., by using $B + \epsilon$ instead of $B$. This approach can yield strong empirical guarantees without sacrificing regret bounds in risk-neutral MDPs, and we believe similar techniques can be extended to the risk-sensitive setting as well.
We hope this clarification addresses the reviewer's concerns regarding the metric’s role and justification within our framework.
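To make the distinction debated in this exchange concrete, here is a minimal sketch (our own illustration; the threshold `B` and per-episode utility values are hypothetical numbers) contrasting the linear (untruncated) violation metric with the truncated metric $\max(0, B - V_g)$:

```python
# Sketch (our own illustration; threshold B and per-episode utility values are
# hypothetical numbers) contrasting the two violation metrics discussed above.

def linear_violation(values, B):
    # Linear (untruncated) metric: signed gaps B - V_g summed over episodes,
    # so a surplus in one episode can cancel a shortfall in another.
    return sum(B - v for v in values)

def truncated_violation(values, B):
    # Truncated metric: only shortfalls count; no cross-episode compensation.
    return sum(max(0.0, B - v) for v in values)

B = 0.0
print(linear_violation([0.0, 0.0], B), truncated_violation([0.0, 0.0], B))    # 0.0 0.0
print(linear_violation([1.0, -1.0], B), truncated_violation([1.0, -1.0], B))  # 0.0 1.0
```

Under the linear metric the reviewer's $(1, -1)$ pair looks as good as $(0, 0)$, while the truncated metric separates them, which is exactly the distinction raised in this discussion.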
Conservative Offline Goal-Conditioned Implicit V-Learning
Accept (poster)
Summary: This paper proposes conservative goal-conditioned implicit V-learning (CGCIVL). The main insight of CGCIVL is to penalize cross-trajectory goal-conditioned values, which may potentially be overestimated, with a conservative regularizer. To improve the empirical performance of CGCIVL, the authors additionally employ other techniques (e.g., quasimetric value functions, hierarchical policy extraction, etc.) from the literature. They evaluate CGCIVL on OGBench, showing that it outperforms the previous methods on navigation environments, including those that require stitching. Claims And Evidence: The claims are empirically supported to some degree, but I do have several questions (see below). Methods And Evaluation Criteria: Their evaluation criteria are reasonable in general, but the tasks are limited to (similar) navigation environments, and it'd have been more convincing if the authors had shown CGCIVL's performance on manipulation environments as well. Theoretical Claims: I briefly reviewed the theoretical results (though I haven't thoroughly gone through the Appendix), and at least they look believable to me. The theoretical results are largely based on standard proof techniques about conservative value estimation. Experimental Designs Or Analyses: I don't have particular concerns about experimental designs or analyses other than the ones I listed in the weakness section below. Supplementary Material: I briefly checked the supplementary material, and confirmed that the authors have submitted their code with (very) brief instructions to reproduce the results. I'd encourage the authors to polish the README file when they release the code to the public. Relation To Broader Scientific Literature: CGCIVL is built upon several existing methods --- IQL, GCIVL, HIQL, CQL, and QRL. 
Although the "novelty" of CGCIVL is not necessarily extremely prominent, I think the paper does have a reasonable degree of contribution (given that the claims are fully empirically supported). Essential References Not Discussed: I don't see any particular missing work. Other Strengths And Weaknesses: ### Strengths * Figure 5 is quite convincing to me (especially in comparison with Figure 1). It is nice to see that $\alpha < 1$ improves performance on "stitch" datasets with the proposed techniques. * CGCIVL achieves the best performance on almost all tasks employed in the paper. ### Weaknesses * The paper omits a key ablation result -- how does CGCIVL's conservative regularization affect performance? This is the supposed key ingredient of the method, so I believe it is crucial to show how this design choice affects performance. In Figure 4 (in its current form), most of the performance gains are seemingly from quasimetric value functions and hierarchical policy extraction. * The authors only evaluate CGCIVL on maze navigation environments. While the authors employ many datasets from OGBench, it'd have been much more informative if the authors had shown how CGCIVL works on other types of environments as well (e.g., manipulation). Does CGCIVL also work well on OGBench manipulation environments? If not, why? * The authors use more training steps (e.g., 3M) for some challenging tasks (e.g., humanoidmaze), whereas the baseline results are obtained at 1M steps. Is CGCIVL still better than the baselines when they are trained with the same number of epochs? * The proposed method is fairly complicated. It combines a number of different ingredients from previous methods -- quasimetric value functions, hierarchical policy extraction, implicit Q-learning, conservative Q-learning, etc. 
Hence, to some degree, their method is somewhat expected to work better than the baselines, because the baselines are usually more "atomic" (in the sense that they mostly employ one or two key techniques). While I don't think this is a major limitation, it would have been a great plus if their method had been simpler. Overall, I'm not entirely convinced by the empirical results, mainly due to the lack of ablations and the limited types of environments. I'd be happy to adjust my score if these points are addressed. Other Comments Or Suggestions: * $\mu$ is never formally defined (it is instead somewhat implicitly defined around L180). Relatedly, is $d^{\pi_\beta}$ correct? I suspect $s$ is sampled from the dataset distribution, not $d^{\pi_\beta}$ (note that they are different when dataset trajectories are truncated). * I'd explicitly mention that $V_{\theta_d} \in \mathcal{Q}^-(\mathcal{S})$ around Equation (13). This is not explicitly stated in the current draft. * What is the value of $\beta$ used for the experiments? Questions For Authors: I don't have any questions other than the ones I asked above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable suggestions. We have carefully addressed each of your concerns in the responses below. ### R1: Methods and Evaluation Criteria We have extended our evaluation to manipulation environments (see Table 1 in the linked [PDF](https://anonymous.4open.science/r/additional-experiment-DDFC/icml_2025_rebuttal.pdf) for details). Detailed analysis is provided in our response R1 to Reviewer [96R5](https://openreview.net/forum?id=5ryn8tYWHL&noteId=wMyrrzRg7V). ### R2: Supplementary Material We would like to polish the README file to ensure our algorithm can be easily reproduced. ### R3: Weaknesses 1. We have conducted additional experiments comparing CGCIVL’s performance with and without the conservative regularization term (i.e., $\eta\neq 0$ and $\eta=0$ respectively). As shown in **Figure 1** (see the linked [PDF](https://anonymous.4open.science/r/additional-experiment-DDFC/icml_2025_rebuttal.pdf) for details), removing this component leads to a significant performance drop, confirming its critical role. Furthermore, we observe that the performance is robust within a suitable range of the regularization coefficient, but both excessively small and large values degrade the results. 2. See our response in R1. 3. In our paper, all algorithms were evaluated under the same number of training steps to ensure a fair comparison. Different from results in OGbench where baselines were trained with 1M steps, we trained all algorithms for longer steps in complicated environments (See Appendix C.3 for details). **Figure 3** (see the linked [PDF](https://anonymous.4open.science/r/additional-experiment-DDFC/icml_2025_rebuttal.pdf) for details) shows training curves for all algorithms in these complex environments. 4. We would like to clarify that CQL and quasimetric are the key techniques of our algorithm, addressing value overestimation on unconnected state-goal pairs. 
IQL is a necessary policy improvement technique, which could be replaced with other methods. The hierarchical structure is a common approach to handle long-horizon tasks and may be omitted in non-long-horizon environments. ### R4: Comments 1. $\mu(g|s)$ denotes an arbitrary distribution which satisfies $\operatorname{supp} \mu \subset \operatorname{supp} p_{m}^{\alpha}$. Trajectory truncation only alters the goal associated with states in different segments, without directly changing the distribution of states in the dataset. Therefore, we can approximate sampling states from $\mathcal{D}$ as sampling from $d^{\pi_{\beta}}$. 2. We'll mention $V_{\theta_d}\in\mathcal{Q}^-(S)$ near Eq. (13) for clarification in the revised manuscript. 3. The parameter $\beta$ in Equations (14)-(15) serves as the temperature coefficient for both the high-level and low-level policy extraction. We empirically set $\beta=3.0$ across all experiments. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I appreciate the additional results, and they look convincing to me. I've raised my score to 3. Two minor comments: * Why does Table 1 in the additional PDF not contain $\texttt{puzzle-3x3}$? In case this result is omitted because CGCIVL doesn't perform better: I believe a new method doesn't necessarily need to achieve the best performance on every single task. It'd be more informative to the community to present the entire result to enable a more holistic evaluation. * $d^{\pi_\beta}(s)$ can be different from $\mathcal{D}$ even without goals, because the former is the discounted state marginal of the policy $\pi_\beta$ (with infinite rollouts), whereas the latter is the truncated state marginal distribution (e.g., consider the extreme case where every trajectory has length 1, in which case $\mathcal{D}$ would be the same as the initial state distribution, while $d^{\pi_\beta}(s)$ isn't). --- Reply to Comment 1.1.1: Comment: Thank you for your additional comments. 
As you suggested, we will include the full tasks in the puzzle environment in the final version of our paper. Regarding the sampling of $s$, I apologize for misunderstanding your point, and your review is correct. $d_{\pi_{\beta}}$ is indeed the state marginal distribution of the dataset and is related to $\pi_{\beta}$. Nevertheless, the conclusion of Theorem 4.1 still holds because $d_{\pi_{\beta}}$ is eliminated during the derivation and is not involved in the final expression.
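For context on the implicit V-learning component discussed in this thread: IQL-style methods fit value functions with an asymmetric expectile loss rather than a plain squared error. A minimal sketch of that loss (our own illustration, not the authors' code):

```python
# Sketch (our own illustration, not the authors' code) of the asymmetric
# expectile loss used in IQL-style implicit value learning:
#   L_tau(u) = |tau - 1(u < 0)| * u^2,
# where u is the residual (target minus current value estimate).
# With tau > 0.5, positive residuals are up-weighted, so the fit tracks an
# upper expectile of the target distribution instead of its mean.

def expectile_loss(u, tau):
    weight = abs(tau - (1.0 if u < 0 else 0.0))
    return weight * u * u

# A positive residual is penalized with weight tau, a negative one with 1 - tau:
# expectile_loss(1.0, 0.9) is about 0.9; expectile_loss(-1.0, 0.9) is about 0.1.
```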
Summary: This paper introduces conservatism to prevent overestimation in unconnected state-goal pairs and uses a quasimetric value network to prevent underestimation in connected cross-trajectory state-goal pairs. Theoretical analysis is provided for the idealized version of the algorithm, and the practical implementation of the algorithm outperforms offline goal-conditioned RL baselines on OGBench. Claims And Evidence: The performance of the proposed CGCIVL algorithm is demonstrated through abundant experiments, which are convincing evidence. However, the connection between the theoretical analysis and the practical CGCIVL algorithm is weak, as the theory is based on an idealized version of CGCIVL (Eq. (8)). The theorems therefore serve more as a motivation than a guarantee. Methods And Evaluation Criteria: Conservatism or regularization is a standard technique in reinforcement learning. The quasimetric framework serves specifically for the goal-conditioned RL problem. Therefore, the proposed method is overall appropriate for the discussed problem. The benchmark OGBench is suitable for goal-conditioned reinforcement learning. Theoretical Claims: Many notations are used without formal definition, thus hindering the understanding of the theorems along with their proofs. For instance, $\mu(g\mid s)$, $\hat V$ and $\hat{\mathcal{B}}$ in Eq. (8). The formulation of Proposition 4.5 is problematic. As claimed in the proposition, the inequality should hold for any $\epsilon>0$, which would imply that $\hat V^\pi(s^-,g)$ has to be $-\infty$. However, this seems to be a minor typo. I believe the theoretical claims are sound after the above issues are addressed. Experimental Designs Or Analyses: Experiments are solid to support the claim. Supplementary Material: No significant problems in the supplementary material. Relation To Broader Scientific Literature: The two key components of CGCIVL, conservatism and quasimetric, are not novel in the RL literature.
The former is standard in RL algorithms, e.g., CQL (Kumar et al., 2020) and COMBO (Yu et al., 2021), and the latter is also proposed in https://arxiv.org/abs/2304.01203. The paper is only investigating the effect of combining these two techniques. Nonetheless, the successful combination of these two methods reveals the contribution of this paper. Essential References Not Discussed: No missing reference found. Other Strengths And Weaknesses: Strength: The paper investigates the advantage of combining conservatism and quasimetric. Weakness: 1) Both conservatism and quasimetric are existing techniques in the RL literature, although this does not severely harm originality, as the methods are tailored specifically for GCRL. 2) Lack of clarity. Many notations are used without formal definition (already discussed above). In addition, the main algorithm (Algorithm 1) needs a detailed description. For instance, in Eq. (12), how do we estimate the expectation, and how do we sample $g$ from $p_m^\alpha(g|s)$? Other Comments Or Suggestions: 1. All notations should be defined before use. Many notations are not standard across the literature, so they will cause confusion to readers without clear definitions. 2. Algorithm 1 should be described in detail to convince the readers that it can be implemented in practice. For instance, we need to discuss how to sample $g$. 3. Both Propositions 4.3 and 4.5 use $s^-$ and $s^+$, but they stand for different meanings in the two propositions. Therefore, we should consider using different notations. Questions For Authors: 1. Does Theorem 4.1 hold for continuous state spaces or only discrete state spaces? Similarly, does it require a discrete action space, or does it also hold for continuous action spaces? This question reflects how the theoretical analysis aligns with practice. 2. It seems unnatural to require the value function to be a quasimetric, because sometimes the ground truth value function might not be a quasimetric.
For instance, suppose states $A, B$ are connected, but both $(A,C)$ and $(C,B)$ are unconnected. Then we should have $V(A,B)=0$, $V(A,C)<0$, $V(C,B)<0$. This violates the quasimetric property. 3. Can CGCIVL fit in the framework of Eq. (8), assuming that we do not use any function approximation? This question determines the relationship between the theory and the practical algorithm. 4. Does Proposition 4.5 still hold if conservatism (regularization) is not added in the algorithm? This is relevant to the novelty of this paper. I am willing to raise the score if the above concerns are resolved and the comments in the previous part are handled. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your time and effort in reviewing our work. We have addressed each concern you raised below. ### R1: Claims and Evidence Thank you for the reviewer’s comment. The theoretical guarantees mentioned in our paper indeed refer to the algorithm prototype based on Eq. (8), rather than the practical algorithm. However, in Lemma 1 (Appendix A), we prove that the expectile regression in the practical algorithm is equivalent to the Bellman operator in Eq. (8). Additionally, compared to Eq. (8), the practical algorithm incorporates two key techniques: 1) hierarchical learning to address long-horizon tasks, and 2) quasimetric distillation to improve the efficiency of value learning. These techniques do not fundamentally alter the core components of the CQL-inspired penalty term and quasimetric, which form the foundation of the theoretical analysis. Therefore, while Eq. (8) does not exactly match the practical algorithm, we believe the practical algorithm still benefits from the theoretical guarantees established by the analysis. ### R2: Theoretical Claims 1. We clarify notations you mentioned as follows: - $\mu(g|s)$ denotes an arbitrary distribution which satisfies $\operatorname{supp} \mu \subset \operatorname{supp} p_{m}^{\alpha}$. - $\hat{V}$ represents an empirical estimate of the true value function $V$ during iteration. - $\hat{B}$ denotes the empirical Bellman operator, which is the sample-based counterpart of the theoretical Bellman operator. 2. Yes. The correct statement of Proposition 4.5 should be: For any $\epsilon>0$ and $\eta>0$, there exists a hyperparameter $\alpha$ such that the inequality holds. ### R3: Relation to Broader Scientific Literature We would like to clarify that our work extends beyond a mere combination of existing techniques. 
The key contributions are: - Problem identification: To the best of our knowledge, we are the first to formalize the critical issue of value overestimation for unconnected state-goal pairs in offline GCRL. - Feasible solutions: Our solution penalizes the values of all cross-trajectory state-goal pairs while ensuring that values on connected pairs are not excessively under-estimated. We introduce a CQL-inspired regularization term to achieve the first goal and use a quasimetric model for accurate value estimation of connected pairs to achieve the second. Both methods are supported by theoretical guarantees. - Difference from original methods: Unlike CQL, which penalizes OOD actions, we introduce a penalty term tailored for state-goal pairs. Unlike QRL, which trains value functions without value iteration, our approach incorporates quasimetric properties into the value iteration process to ensure accurate value estimation for connected state-goal pairs. The novelty of our method lies in re-engineering these components to address a new problem in offline GCRL, rather than merely combining them. ### R4: Weaknesses 1. The originality of our work is discussed in R3. 2. In R2.1, we provide explanations for the undefined notations and will include these details in the revised version of the paper. As defined in Sec 2, in Algorithm 1, states are randomly sampled, and goals are sampled in two ways: 1) $p_{rand}^{\mathcal{D}}(g)$ samples uniformly from all states in $\mathcal{D}$, and 2) $p_m^{\alpha}(g|s)$ samples from the same trajectory as state $s$ with probability $\alpha$, otherwise using $p_{rand}^{\mathcal{D}}(g)$. ### R5: Other Comments or Suggestions 1. See R2.1. 2. See R4.2. 3. Future revisions will use distinct notation per proposition for clarity. ### R6: Questions 1. Although the proof in the original paper is based on discrete state and action spaces, its key components can also be extended to continuous settings. 
Non-negative penalty terms for underestimation can be generalized to density-based terms, and concentration bounds for the empirical Bellman operator $\hat{B}^{\pi}$ do not require discretization. The Neumann series ensures that $(I - \gamma P^{\pi})^{-1}$ remains well-defined in continuous spaces when $\gamma < 1$. Therefore, while Theorem 4.1 has not been strictly proven for continuous settings, our algorithm still benefits from the theoretical analysis in continuous environments, as further supported by experimental results. 2. As described in Sec 2 of our paper, the distance between state and goal should satisfy the properties of a quasimetric (Eq. (6)). However, the value function should exhibit an inverse relationship with the distance to the goal (Eq. (7)). Thus we have $V(A,B)\geq V(A,C)+V(C,B)$, which holds when $V(A,B)=0$, $V(A,C)<0$, $V(C,B)<0$. 3. Please refer to R1, where we discuss the differences between the practical algorithm and Eq. (8). 4. Proposition 4.5 is based on Theorem 4.1 by incorporating the quasimetric and replacing $\mu$ with the uniform distribution $p_{rand}^{\mathcal{D}}$. Consequently, Proposition 4.5 cannot hold if the conservatism is not included. --- Rebuttal Comment 1.1: Comment: Thank you for your response. It resolves my major concern of novelty, so I've updated my review and raised the score to 3.
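The sign argument in answer 2 of this thread can be checked in a few lines; a minimal sketch, where the states and distance values are illustrative choices mirroring the reviewer's example, not quantities from the paper:

```python
# Quasimetric distance d: non-negative, d(x, x) = 0, and the triangle
# inequality d(a, b) <= d(a, c) + d(c, b); symmetry is NOT required.
# With V(s, g) = -d(s, g) (Eq. (7)), the triangle inequality flips
# direction for V, so V(A, B) >= V(A, C) + V(C, B).
d = {
    ("A", "B"): 0.0,   # A and B connected at zero cost (reviewer's case)
    ("A", "C"): 5.0,   # effectively unconnected pairs: large distance
    ("C", "B"): 5.0,
}

def V(s, g):
    return -d[(s, g)]

# Triangle inequality for d: 0 <= 5 + 5 ...
assert d[("A", "B")] <= d[("A", "C")] + d[("C", "B")]
# ... so the reversed inequality for V holds, matching the rebuttal:
# V(A, B) = 0 with V(A, C) < 0 and V(C, B) < 0 is consistent.
assert V("A", "B") >= V("A", "C") + V("C", "B")
```

The point is that $V(A,B)=0$, $V(A,C)<0$, $V(C,B)<0$ does not violate the quasimetric property once the value is the negated distance.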
Summary: This paper proposes a method for offline goal-conditioned reinforcement learning with a penalty term that penalizes the value function for unconnected state-goal pairs, and evaluates it on OGBench. The results suggest the method outperforms previous methods on goal-conditioned tasks. Claims And Evidence: The paper makes several claims. 1. Offline goal-conditioned reinforcement learning suffers from value overestimation on unconnected state-goal pairs. Support for this claim is presented through theoretical analysis in Theorem 3.3 and experimental evidence both in Table 1 and Figure 3. 2. The method proposed in this paper addresses the value overestimation and achieves better performance. Generally this claim is supported by the main results in Table 1, but it only includes a subset of tasks from OGBench. The evidence would be more convincing if results were shown for all of the OGBench experiments. Methods And Evaluation Criteria: The method and evaluation criteria appear to be well suited for the problem. The proposed method directly addresses the problem and provides theoretical motivation. The benchmark selection is appropriate. Theoretical Claims: The proofs appear to be correct. Experimental Designs Or Analyses: The experimental design is valid and the selection of benchmarks is good. There are ablation studies to back up claims, and the paper compares against the other state-of-the-art methods in offline goal-conditioned reinforcement learning. The experimental section could be improved by providing results on the entire OGBench. Supplementary Material: The supplementary material provides theoretical analysis of the theorems and experiment details. Relation To Broader Scientific Literature: The key contributions of the paper are related to the advancement of offline goal-conditioned reinforcement learning. It proposes a new method that addresses an important problem in this direction of research. 
Essential References Not Discussed: Other related works that are essential are included in the paper. Other Strengths And Weaknesses: The paper is original and brings ideas from conservative value estimation to offline goal-conditioned reinforcement learning. It is well motivated in its approach and achieves higher performance compared to previous algorithms. A weakness of the paper is that it only compares on maze navigation tasks, and it is unclear how it would scale to other domains. Other Comments Or Suggestions: It would be helpful to include a mean for each of the environments. Questions For Authors: 1. Is there a reason the method is not run on all of the environments in OGBench? 2. The method has a lot of important hyperparameters. What is the sensitivity to other hyperparameters besides alpha? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your insightful feedback on our work. Please refer to the detailed responses below to each of the raised concerns. ### R1: Weaknesses In order to provide a more comprehensive evaluation of the performance, we have conducted additional experiments in three manipulation environments (Cube, Scene, Puzzle), comprising a total of 8 manipulation tasks of varying complexity. The results, presented in **Table 1** (see the linked [PDF](https://anonymous.4open.science/r/additional-experiment-DDFC/icml_2025_rebuttal.pdf) for details), demonstrate that CGCIVL achieves superior performance in all manipulation tasks, particularly in scene and elementary cube tasks, consistent with results in maze environments. We will include these additional experimental results in the revised version of the paper. ### R2: Questions 1. In the original paper, we aimed to validate the algorithm in solving the goal stitching task. However, OGBench currently only provides the stitch dataset in maze environments. Nevertheless, we have supplemented our evaluation with additional experiments in manipulation environments to further verify the algorithm's performance (see our response in **R1**). We will expand the experimental scope and incorporate these additional results into the final version of the paper. 2. Besides the analysis of $\alpha$, we have also performed sensitivity studies on both the penalty coefficient $\eta$ and the subgoal interval $k$. The analysis of $\eta$ and related experiments can be found in our response R2 to Reviewer [Q24N](https://openreview.net/forum?id=5ryn8tYWHL&noteId=56lsvAsJb7). **Figure 2** (see the linked [PDF](https://anonymous.4open.science/r/additional-experiment-DDFC/icml_2025_rebuttal.pdf) for details) shows the performance of CGCIVL across different subgoal interval sizes. Results indicate that CGCIVL achieves the optimal performance with $k$ between $25$ and $50$. 
Overly small values of $k$ lead to the "signal-to-noise" issue in the value functions, as identified in the HIQL paper [1], while excessively large values of $k$ make subgoals difficult to achieve. ### R3: Other Comments or Suggestions We will include the mean score for each environment in the revised version of our paper to provide more comprehensive results. [1] Park, S., Ghosh, D., Eysenbach, B., and Levine, S. HIQL: Offline goal-conditioned RL with latent states as actions. Advances in Neural Information Processing Systems, 36, 2024.
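The two goal-sampling distributions referenced in these rebuttals, the uniform $p_{rand}^{\mathcal{D}}(g)$ and the mixture $p_m^{\alpha}(g|s)$, can be sketched as follows; the dataset layout and function names are illustrative assumptions, not the authors' code:

```python
import random

def sample_goal_rand(dataset):
    """p_rand^D(g): pick uniformly among all states in the dataset."""
    all_states = [s for traj in dataset for s in traj]
    return random.choice(all_states)

def sample_goal_mixture(dataset, traj_idx, alpha):
    """p_m^alpha(g|s): with probability alpha, sample a goal from the
    same trajectory as the current state; otherwise fall back to
    the uniform distribution p_rand^D."""
    if random.random() < alpha:
        return random.choice(dataset[traj_idx])
    return sample_goal_rand(dataset)

# Illustrative dataset: two trajectories of integer "states".
dataset = [[0, 1, 2, 3], [10, 11, 12]]
g = sample_goal_mixture(dataset, traj_idx=0, alpha=0.7)
```

With $\alpha$ close to 1, most goals are in-trajectory; the cross-trajectory goals sampled with probability $1-\alpha$ are exactly the pairs the conservative penalty targets.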
Summary: This paper proposes an algorithm for goal-conditioned offline RL called Conservative Goal-Conditioned Implicit V-Learning (CGCIVL). CGCIVL improves upon Hierarchical Implicit Q-Learning (Park et al., 2024b) by introducing two techniques. First, it adopts a regularizer similar to CQL (Kumar et al., 2020) to penalize values for unconnected state-goal pairs. Then, based on the observation that a goal-conditioned value function is a quasimetric, it models the value function with Interval Quasimetric Embeddings to prevent over-penalization of values for connected state-goal pairs. CGCIVL outperforms existing baselines on the OGBench (Park et al., 2024a) benchmark containing various goal-reaching tasks. Claims And Evidence: Most of the claims made in the submission are supported by clear and convincing evidence. For those that are problematic, refer to the following sections. Methods And Evaluation Criteria: It is unclear why the authors use $V_{\theta_v}$ instead of the distilled $V_{\theta_d}$ to estimate the advantage functions $\tilde{A}_h$ and $\tilde{A}_l$. Aside from that, the proposed methods and the evaluation criteria make sense for the problem. Theoretical Claims: The $\alpha$ in Proposition 4.3 depends on the choice of the state-goal pairs, which means there might be no $\alpha$ that satisfies the condition for all state-goal pairs. The proposition becomes irrelevant since $\alpha$ is fixed for the entire training process. As Propositions 4.4 and 4.5 are both based on Proposition 4.3, the two propositions are also irrelevant. Experimental Designs Or Analyses: The penalty coefficient $\eta$ also seems to play an essential role in the algorithm, but the authors have not conducted a sensitivity analysis on it. Supplementary Material: I have gone through the proofs in the appendix. Relation To Broader Scientific Literature: The proposed algorithm is mainly based on HIQL (Park et al., 2024b). 
The penalization term for unconnected state-goal pairs was inspired by CQL (Kumar et al., 2020). The observation that an optimal goal-conditioned value function is a quasimetric was proved by Liu et al. (2023). Finally, the authors modeled their value function using IQE (Wang & Isola, 2022a). Essential References Not Discussed: To the best of my knowledge, the paper has cited all of the essential references. Other Strengths And Weaknesses: Trajectory stitching is necessary for real-world problems because collecting high-quality data is challenging. This paper proposes an interesting method of applying HER to cross-trajectory state-goal pairs. Other Comments Or Suggestions: CQL adds a term to the loss function that maximizes the values for in-distribution data so that the regularizer is canceled out for in-distribution data. Similarly, adding a loss-function term that maximizes the values for connected state-goal pairs might be helpful. Questions For Authors: I do not have any additional questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's constructive feedback. Below, we respond to each concern point by point. ### R1: Methods and Evaluation Criteria The choice between using $V_{\theta_v}$ or the distilled $V_{\theta_d}$ for advantage estimation ($\tilde{A}^h$ and $\tilde{A}^l$) is flexible, as both approaches are empirically valid, surpassing all baselines. Our experiments confirm that $V_{\theta_d}$ achieves comparable performance when used for policy extraction, suggesting either value function can be adopted without compromising results. We will clarify this point in the final paper.

| | pointmaze-large-navigate | pointmaze-giant-navigate | pointmaze-large-stitch | pointmaze-giant-stitch |
| ----------------------- | ------------------------ | ------------------------ | ---------------------- | ---------------------- |
| CGCIVL (with $\theta_v$) | $92 \pm 4$ | $80 \pm 12$ | $98 \pm 2$ | $81 \pm 17$ |
| CGCIVL (with $\theta_d$) | $98 \pm 2$ | $78 \pm 14$ | $96 \pm 6$ | $82 \pm 15$ |

### R2: Theoretical Claims Thank you for pointing out this question. Here we briefly explain why there exists an $\alpha$ that satisfies the condition for all state-goal pairs. The proof of Proposition 4.3 demonstrates that for any fixed $\epsilon > 0$ and arbitrary tuple $x=(s^+,s^-,g)$ sampled from the dataset, where $(s^+,g)$ is in-trajectory and $(s^-,g)$ is cross-trajectory, there exists an $\alpha_x\in(0,1)$ such that the inequality holds when $1>\alpha>\alpha_x$. Consequently, letting $\tilde{\alpha}=\sup_x{\alpha_x}$, we can find a static $\alpha\in (\tilde{\alpha},1)$ which satisfies the condition for all state-goal pairs. We will provide further clarification in the revised version of the paper. 
### R3: Experimental Designs or Analyses As suggested, we have conducted additional ablation studies to analyze the sensitivity of the penalty coefficient $\eta$, and results are presented in **Figure 1** (see the linked [PDF](https://anonymous.4open.science/r/additional-experiment-DDFC/icml_2025_rebuttal.pdf) for details). The results indicate that both excessively small $\eta$ (causing insufficient regularization) and excessively large $\eta$ (over-constraining the optimization) degrade the performance. In practice, we determine the optimal $\eta$ through empirical validation across multiple candidate values. ### R4: Other Comments or Suggestions Maximizing values for connected state-goal pairs is indeed an interesting direction. However, we might need to carefully address several practical considerations: 1) directly sampling from the distribution of connected state-goal pairs in the dataset is difficult, and 2) further analysis is required to establish appropriate theoretical bounds, similar to those in CQL, when incorporating this approach. We plan to thoroughly explore these open issues in future work. --- Rebuttal Comment 1.1: Comment: Thank you for your response. However, I still have a question to ask. The proof of Proposition 4.3 in the current version of the paper does not seem to mention the existence of a global upper bound $\alpha$ of $\alpha_x$. Could you elaborate on why such $\alpha$ should exist? --- Reply to Comment 1.1.1: Comment: Thank you for your additional comments. Proposition 4.3 indicates that for any $x = (s^+, s^-, g)$, there exists an $\alpha_x$ such that the inequality holds. Furthermore, the conclusion of Proposition 4.3 also holds when $\alpha$ is greater than $\alpha_x$ based on the current proof process. Since the offline dataset is finite, there are only finite combinations of $x$. Therefore, we can select the maximum value from the finite set of $\alpha_x$ as the upper bound. 
Thus, in theory, we can use a fixed $\alpha$ that is not too small during training to achieve a lower estimate for the value of cross-trajectory state-goal pairs. We will include this clarification in the subsequent version of the paper.
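The finiteness argument in this reply can be written out compactly; a sketch in the thread's notation (our paraphrase, not text from the paper):

```latex
% The offline dataset \mathcal{D} is finite, hence so is the set
% X = \{ x = (s^+, s^-, g) \} of tuples drawn from it. Proposition 4.3
% supplies, for each x, a threshold \alpha_x \in (0, 1) such that the
% inequality holds for all \alpha \in (\alpha_x, 1). Therefore
\tilde{\alpha} \;=\; \max_{x \in X} \alpha_x \;<\; 1 ,
% and any fixed \alpha \in (\tilde{\alpha}, 1) satisfies the inequality
% of Proposition 4.3 simultaneously for every state-goal pair.
```

The only step that needs finiteness is replacing the supremum by a maximum, which guarantees $\tilde{\alpha}$ is strictly below 1.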
EGPlace: An Efficient Macro Placement Method via Evolutionary Search with Greedy Repositioning Guided Mutation
Accept (poster)
Summary: This paper proposes EGPlace, an evolutionary search-based approach for macro placement. It incorporates the wirelength, congestion, and overlap into its score computation. It achieves better HPWL and faster speed than previous RL-based approaches. ## update after rebuttal After carefully reviewing the rebuttals and comments, I would like to maintain my current score. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. There are no issues about the correctness of any proofs for theoretical claims. Experimental Designs Or Analyses: The validity of mixed-size placement is questionable. For the comparison between the proposed method EGPlace and analytical DREAMPlace, please see the "Reference Results for Macro Placement" part in https://github.com/limbo018/DREAMPlace. The results of DREAMPlace 4.1.0 are better than those of EGPlace. Please discuss these results properly. Supplementary Material: The authors did not provide any supplementary material. I have checked the appendix. Relation To Broader Scientific Literature: The proposed method contributes to the application of ES algorithms to real and complex scenarios. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Strengths** 1. An efficient approach to quickly construct the overlap mask between the current block and the placed blocks, capable of reflecting the exact overlap value. **Weaknesses** 1. The overlap rate of circuit ariane with approach MaskPlace, which is shown in Table 4 in the appendix, is 3.27%. However, in the paper of MaskPlace [1], the overlap of "ariane" shown in Table 3 with method "MaskPlace (hard constraint)" is 0.00. Please discuss this inconsistency in overlap. 2. For the comparison between the proposed method EGPlace and analytical DREAMPlace, please see the "Reference Results for Macro Placement" [2] part in https://github.com/limbo018/DREAMPlace. The results of DREAMPlace 4.1.0 are better than those of EGPlace. 
Please discuss these results properly. In the authors' submission, the results of DREAMPlace are much worse than other baselines, but according to the reference results shown on the GitHub page of DREAMPlace, EGPlace is worse than DREAMPlace. I think it is very important to figure out this inconsistency, since the gap between the DREAMPlace results in the submission and those on GitHub is very large. 3. The overall technical contribution is limited, and the full pipeline is similar to WireMask-BBO. 4. Other critical objectives are not taken into consideration, such as the post-routing wirelength and timing metrics including WNS and TNS. [1] Lai, Yao, Yao Mu, and Ping Luo. "Maskplace: Fast chip placement via reinforced visual representation learning." Advances in Neural Information Processing Systems 35 (2022): 24019-24030. [2] https://github.com/limbo018/DREAMPlace Other Comments Or Suggestions: 1. Macro placement results seem to be listed in Table 1 in the main paper, instead of Table 6 in the Appendix. 2. Typo: "araine" -> "ariane" in the caption of Table 4. 3. In line 966, maybe it should be "it is 2.8x more efficient than EfficientPlace" instead of "EGPlace"? Questions For Authors: 1. In the calculation of $wirelen_m$ in Eq. 4, the authors claim that $e_p$ is the bounding box center of a net which contains pin $p$. If there are multiple nets containing pin $p$ simultaneously, how is this case addressed? 2. The same question for $cong_m$ in Eq. 5, where $E_p$ is the RUDY value of the net containing pin $p$. If there are multiple nets containing pin $p$ simultaneously, how is this case addressed? 3. How is the sequence of module placement determined? It would be better to directly show the sequence, instead of just writing "larger and highly connected modules first". Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your constructive review and valuable suggestions! Below, we provide a detailed response to your questions and comments. If any of our responses fail to sufficiently address your concerns, please inform us, and we will promptly follow up. **The Results of MaskPlace on Circuit Ariane** Thanks for your feedback. We notice that the results in Table 4 of MaskPlace are normalized, so our results for MaskPlace and Chipformer are taken from Table 7 in the Chipformer paper. For further verification, we have run the released code of MaskPlace on the ariane benchmark for 1K iterations with the grid size set to 224, and observed an overlap ratio of 3.33%. This suggests that on datasets with large module coverage areas, completely eliminating overlap can be challenging. **Difference Between EGPlace and WireMask-BBO** Although both WireMask-BBO and EGPlace employ a greedy strategy, there are several key differences between the two methods. 1) The evolutionary search variant in WireMask-BBO relies solely on random mutations to explore the search space, whereas EGPlace performs guided mutations to enhance layout quality and improve sampling efficiency. 2) EGPlace is significantly more efficient than WireMask-BBO. WireMask-BBO first applies layout mutation, followed by a greedy genotype-to-phenotype transformation that requires removing and repositioning all macros in the circuit. In contrast, EGPlace introduces a mutation operator that integrates the layout adjustment and reconstruction phases, eliminating the computational overhead caused by full layout reconstruction after local modifications. In terms of experimental results, EGPlace achieves an average 10.79% improvement in HPWL and a 7.8× speedup in runtime (Table 1), which also verifies the new technical contribution of EGPlace. **Typo Correction** We sincerely apologize for the typographical errors. 
The macro placement results are listed in Table 1, where EGPlace is 2.8× more efficient than EfficientPlace. We will correct these errors in the revised version. **How to Address the Case where a Pin Belongs to Multiple Nets** A pin serves as an input/output interface of a module, and we do not think that pins contain each other. We assume that your question pertains to how the scores of modules are computed when a pin is associated with multiple nets. Based on this understanding, we provide our response below accordingly. In most cases, a pin belongs to only one net, as noted in Appendix A.1 of MaskPlace. In our implementation, we follow the same approach as competitors like MaskPlace, EfficientPlace, and WireMask-EA, by treating each input and output interface of a net in the .nets file as a separate pin. Consequently, if there exists a pin that appears in multiple nets, each occurrence is treated as an independent pin. We sincerely apologize if we have misunderstood your question. Please let us know if further clarification is needed. **How to Determine Module Placement Order** We determine the module placement order in the same manner as MaskPlace. We will include an algorithm for determining the module order in the appendix of the revised version. Specifically, the placement order of a module m is computed considering the size of the module (Size[m]), the number of nets (NodeNetNum[m]), and the number of direct neighbors placed previously (NeiScore[m]). Each time, we select one module, add it into the order list, and then update NeiScore[n] for each neighbor n of m. **Input**: A set of all modules M, a hash table Size storing the area of each module, a hash table NodeNetNum storing the number of nets each module belongs to, an adjacency matrix Adj where modules within the same net are considered neighbors. α and β are hyperparameters set according to MaskPlace. **Output**: A sequence I that stores M in order.
```text
1. I ← ∅
2. NeiScore[m] ← 0 for all m ∈ M
3. While I ≠ M DO
4.     m ← Argmax(Size[m]*α + NodeNetNum[m]*β + NeiScore[m]) for m ∉ I
5.     I ← I ∪ {m}
6.     For Each n ∈ Adj[m] DO
7.         NeiScore[n] ← NeiScore[n] + 1
8. Return I
```
**Performance of DreamPlace** Please refer to our feedback for Reviewer Nq5X. **Consideration of Other PPA Objectives** EGPlace focuses on guided layout generation, including selecting layout candidates from the pool and choosing poorly-placed modules for reconstruction. Currently, the guidance comes from HPWL and other easily-computed metrics. It is not difficult for EGPlace to incorporate final metrics, such as those learned from the mask in LaMPlace (ICLR 2025). We have run OpenROAD to collect PPA metrics on layouts produced by EGPlace using the ariane133 dataset, without considering PPA metrics during the layout generation. As the process is slow, we will report the results when they become available.
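The ordering procedure described above can be turned into a short runnable sketch; the three-module instance, α, and β below are illustrative choices (the actual hyperparameter values follow MaskPlace):

```python
def module_order(modules, size, node_net_num, adj, alpha=1.0, beta=1.0):
    """Greedily build the placement order: repeatedly pick the unplaced
    module maximizing Size*alpha + NodeNetNum*beta + NeiScore, then
    credit each of its neighbors with one placed-neighbor point."""
    order, placed = [], set()
    nei_score = {m: 0 for m in modules}
    while len(order) < len(modules):
        m = max(
            (m for m in modules if m not in placed),
            key=lambda m: size[m] * alpha + node_net_num[m] * beta + nei_score[m],
        )
        order.append(m)
        placed.add(m)
        for n in adj.get(m, ()):
            nei_score[n] += 1
    return order

# Illustrative instance: "a" is largest; "c" shares a net with "a",
# so placing "a" first boosts "c" ahead of the otherwise-tied "b".
size = {"a": 10, "b": 5, "c": 5}
nets = {"a": 1, "b": 1, "c": 1}
adj = {"a": ["c"], "c": ["a"]}
print(module_order(["a", "b", "c"], size, nets, adj))  # ['a', 'c', 'b']
```

The NeiScore update is what makes the order placement-aware: a module's priority rises once its net neighbors have already been placed.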
Summary: The manuscript proposes EGPlace, an innovative evolutionary optimization framework for macro placement in IC design, introducing a greedy repositioning-guided mutation operator and an efficient mask computation algorithm. Experimental results show that EGPlace achieves significant improvements in wirelength reduction and computational speed compared to existing methods. Claims And Evidence: Yes. Methods And Evaluation Criteria: - The main concern is that the method optimizes the macros' wirelength. However, the results show that the mixed-size placement method can also achieve the best HPWL results, but the reason is not clearly explained in the paper. - Also, works [1, 2] claim that optimizing wirelength does not directly optimize PPA. Therefore, whether it can ultimately improve chip performance remains open to discussion. - The main contributions of this paper lie in the proposal of a novel mask computation method and improvements to the evolutionary algorithm. Its contributions to the ML community are relatively limited. [1] Benchmarking End-To-End Performance of AI-Based Chip Placement Algorithms. [2] Reinforcement Learning Policy as Macro Regulator Rather than Macro Placer. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: The experiments cover most placement benchmarks and baselines. However, the experiments lack a comparison with other analytical methods. Supplementary Material: I have read all parts of the supplementary material. Relation To Broader Scientific Literature: The contributions of this paper seem only applicable to macro placement tasks. Essential References Not Discussed: NA Other Strengths And Weaknesses: While ICML is a prestigious conference in machine learning, its primary focus is on advancing the theoretical and practical aspects of machine learning algorithms and their applications. 
Evolutionary algorithms, unless tightly integrated with machine learning tasks (e.g., neural architecture search, hyperparameter optimization, or reinforcement learning), might not align well with the core interests of the ICML community. The paper may be more suitable for a domain conference. Other Comments Or Suggestions: NA Questions For Authors: - How were the results in Table 6 (mixed-size placement) obtained? According to the DREAMPlace repo (https://github.com/limbo018/DREAMPlace), bigblue1 can reach 8.62e7. However, all baselines are worse than DREAMPlace. - In Table 5, MaskPlace gets 9.69 in HPWL for bigblue3. Is there a typo? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive review and valuable suggestions! **Regarding Mixed-size Placement** We conduct mixed-size placement following the same approach in Chipformer and EfficientPlace. We first fix all macros placed by EGPlace and apply the global placement step of DreamPlace to position the standard cells. We then set all modules as movable and perform a complete mixed-size placement process using DreamPlace, which includes global placement, legalization, and detailed placement. Note that in this paper, our primary focus is on macro placement, and the mixed-size placement process is not fully optimized, which may be further improved by incorporating standard cell clustering and considering additional surrogate metrics, as suggested in MacroRegulate and LaMPlace. **Regarding PPA Metrics** We fully agree that optimizing the final PPA metrics is the ultimate goal. However, we must note that incorporating final PPA metrics into online optimization is challenging, as obtaining them is time-consuming. EGPlace focuses on guided layout generation, including selecting layout candidates from the pool and choosing poorly-placed modules for reconstruction. Currently, the guidance comes from HPWL and other easily-computed metrics. It is not difficult for EGPlace to incorporate final metrics, such as those learned from the mask in LaMPlace (ICLR 2025). We perform placement using the current EGPlace method on the "ariane133" dataset and evaluate the PPA metrics through OpenRoad. As the process is slow, we will report the results when they become available. **Our Relationship with ML** Module placement is a well-known machine learning problem, and we have included ML baselines for comparison, showing that EGPlace achieves better performance with less running time. We believe that the audience at ICML would have sufficient interest in our work (if accepted). 
The current version of EGPlace serves as a strong starting point that can be further enhanced by machine learning algorithms. For instance, EGPlace can potentially be combined with the learned masks for PPA metrics improvement from LaMPlace. **Lack of Comparison with Analytical Methods** The results of analytical methods, including NTUPlace3, RePlace, and DREAMPlace, have already been compared with our primary competitors, MaskPlace and EfficientPlace, in their respective papers. Notably, MaskPlace and EfficientPlace have shown superior performance over these baselines. However, we plan to include results from additional competitors in the revised version to make our paper more comprehensive. **Only Applicable to Macro Placement** Similar to some baseline methods, including MaskPlace, EfficientPlace, and WireMask-EA, our work primarily focuses on macro placement, based on the observation that macro placement has a significant impact on layout metrics. Note that EGPlace supports mixed-size module placement. We have conducted mixed-size placement using DreamPlace and reported the results in Table 6 of our manuscript. **Performance of DreamPlace** We conduct mixed-size placement following the same approach as in Chipformer and EfficientPlace. We first fix all macros placed by EGPlace and apply the global placement step of DreamPlace to position the standard cells. We then set all modules as movable and perform a complete mixed-size placement process using DreamPlace, which includes global placement, legalization, and detailed placement. For the results of DreamPlace, we report the outcomes from Table 5 of the baseline EfficientPlace for a fair comparison, where the results are close to those of DreamPlace 4.0.0. However, we observe that the results of DreamPlace 4.1.0 on the mixed-size placement dataset have improved significantly, outperforming EGPlace. 
As our work primarily focuses on macro placement, we further evaluate the macro placement performance of EGPlace and DreamPlace 4.1.0 on the ISPD2005 dataset. The results in the table below indicate that EGPlace outperforms DreamPlace 4.1.0 on 7 out of 8 benchmarks, achieving an average HPWL improvement of 25.1%.

## Table: Comparison to DreamPlace 4.1.0 on Macro Placement over ISPD2005.

| Method | Adaptec1 | Adaptec2 | Adaptec3 | Adaptec4 | Bigblue1 | Bigblue2 | Bigblue3 | Bigblue4 |
|------------|----------|----------|----------|----------|----------|----------|----------|----------|
| EGPlace | 5.75 | 37.99 | 60.01 | 54.45 | 2.23 | 10.55 | 49.98 | 59.73 |
| DreamPlace | 10.24 | 31.14 | 62.63 | 63.78 | 6.07 | 14.20 | 77.81 | 92.51 |

**Typo on MaskPlace** We apologize for the typo in Table 5. The HPWL achieved by MaskPlace is 96.91 × 10⁵ rather than 9.69 × 10⁵. We will correct this error in the revised version.

---

Rebuttal Comment 1.1: Comment: Thank you for highlighting that DreamPlace 4.1 performs better in mixed-size placement scenarios. Since mixed-size placement is more critical than macro placement in our application, I suggest we update Table 6 with these latest results for a more accurate comparison. Regarding macro placement specifically, could you elaborate on the efficiency differences between DreamPlace and EGPlace in terms of runtime?

---

Reply to Comment 1.1.1: Comment: Thanks for your valuable feedback! We plan to update Table 6 with the results of DreamPlace 4.1.0 in the revised version. Regarding macro placement on the eight ISPD2005 benchmark circuits, the average runtime of DreamPlace 4.1.0 and EGPlace is 19.8 seconds and 2925 seconds, respectively. This demonstrates that DreamPlace 4.1.0 achieves much higher computational efficiency than EGPlace. We provide more explanation of the results: 1. DreamPlace, as an analytical-based method, generally offers significantly higher computational efficiency than other types of approaches. 2. 
Analytical-based methods have certain limitations in placement quality. For example, they face challenges in handling non-differentiable objectives such as congestion, which can further degrade the final placement quality. Moreover, due to relaxed overlap constraints, modules may heavily overlap during the global placement stage, requiring substantial displacement during the legalization step to resolve overlaps; this often leads to increased wirelength. As shown in the comparison table included in our previous rebuttal, EGPlace outperforms DreamPlace 4.1.0 in terms of placement quality, **achieving better results on 7 out of 8 benchmarks**. While EGPlace takes approximately one hour to complete macro placement, we believe that this runtime is a **reasonable trade-off** for achieving significantly better results. 3. As demonstrated in Table 1 of the manuscript, EGPlace provides much higher efficiency compared to state-of-the-art reinforcement learning and stochastic-based methods, achieving a **2.8× speedup over EfficientPlace and a 7.8× speedup over WireMask-EA**. 4. Additionally, please note that DreamPlace is primarily implemented in **C**, while EGPlace is implemented in **Python**, which is an interpreted language and generally much slower in execution speed. For mixed-size placement, as mentioned in our previous rebuttal, EGPlace still has room for improvement. We expect that with further enhancements, the method could achieve even better placement results. **Regarding the PPA evaluation results:** We appreciate the comments and concerns raised by **Reviewer nQ5x** and **Reviewer pQfq** regarding the evaluation of PPA metrics. We would like to respond to their points below. We attempted to conduct macro placement on the ariane133 dataset supported by OpenROAD by selecting the 256 largest modules. 
Following the two-stage flow described in Appendix D5 of the manuscript, we performed mixed placement using DREAMPlace and then carried out PPA evaluation using the code provided in ChipBench. During the experiments, we encountered several challenges: 1. The experimental flow was relatively complex, involving both EGPlace and DREAMPlace for placement, as well as conversions between LEF/DEF and Bookshelf formats. We had to carefully ensure data consistency throughout the process, and handling dataset format conversions was a new challenge that was not encountered in our previous experiments. 2. The files generated from our early-stage filtering and conversion caused issues during DREAMPlace legalization. We spent several days identifying and resolving this problem. 3. We evaluate PPA using ChipBench on a server with an Intel Xeon Silver 4210R CPU (2.40GHz). The evaluation process is extremely time-consuming, sometimes taking over 24 hours. This resulted in long waiting times for results and made it difficult to quickly debug and adjust errors due to the slow feedback. Consequently, a considerable amount of time is needed to complete this experiment. The program is currently running and has reached the detailed routing stage. If it completes successfully and the results become available, we will present the subsequent experimental outcomes via the link https://anonymous.4open.science/r/EAPlace-31A4. As of now, the output log (last 10 lines) of the PPA evaluation program we are currently running is as follows:

    [INFO DRT-0194] Start detail routing.
    [INFO DRT-0195] Start 0th optimization iteration.
    Completing 10% with 12097548 violations.
    elapsed time = 01:31:38, memory = 34433.00 (MB).
    Completing 20% with 21157172 violations.
    elapsed time = 02:45:07, memory = 37354.73 (MB).
    Completing 30% with 22915019 violations.
    elapsed time = 03:20:10, memory = 37285.90 (MB).
    Completing 40% with 35208776 violations.
    elapsed time = 05:28:14, memory = 45461.79 (MB).
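As a quick arithmetic check on the comparison reported earlier in this thread, the "7 out of 8" win count and the 25.1% average HPWL improvement can be recomputed directly from the values in the EGPlace vs. DreamPlace 4.1.0 table (a minimal standalone script, not part of the authors' code):

```python
# HPWL values copied from the EGPlace vs. DreamPlace 4.1.0 ISPD2005 table.
egplace    = [5.75, 37.99, 60.01, 54.45, 2.23, 10.55, 49.98, 59.73]
dreamplace = [10.24, 31.14, 62.63, 63.78, 6.07, 14.20, 77.81, 92.51]

# Relative improvement of EGPlace over DreamPlace on each benchmark.
improvement = [(d - e) / d for e, d in zip(egplace, dreamplace)]

wins = sum(1 for i in improvement if i > 0)  # benchmarks where EGPlace has lower HPWL
avg = sum(improvement) / len(improvement)    # average relative improvement
print(wins, round(avg * 100, 1))             # prints: 7 25.1
```

The one loss is Adaptec2, where DreamPlace's HPWL is lower; averaging the signed per-benchmark improvements reproduces the reported 25.1%.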
Summary: This article presents a new mutation operator for evolutionary algorithms designed for the problem of macro placement. The new operator, the Greedy Repositioning Guided Mutation, constructs a set of good placements for a module and then randomly selects one. Compared to a traditional mutation operator, it therefore encourages good placement of the module, while still allowing for search. This mutation operator is combined with a standard evolutionary algorithm and tested on a standard benchmark in macro placement. Claims And Evidence: The main claim is that the introduced mutation operator improves search for chip configurations. This is demonstrated clearly by the comparison between the proposed method EGPlace and a similar evolutionary method WireMask-EA, as well as other baseline methods. Methods And Evaluation Criteria: The proposed method is a mutation operation for the problem of macro placement. In that context, it is appropriately evaluated, however in the "Experimental Designs Or Analyses" section, I provide some improvements to the experimental analysis which would strengthen the claims. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: The experiments appear well designed, using the ISPD2005 dataset and the Ariane RISC-V CPU benchmark. The method is compared against 4 baseline methods; the EGPlace method consistently improves over other methods. There are a few points of improvement: + ICCAD2015 is a more recent benchmark than ISPD2005 - what is the motivation for ISPD2005? + The number of baseline methods included is limited. WireMask-EA compares with SP-SA, NTUPlace3, RePlace, DREAMPlace, Graph Placement, DeepPR, and MaskPlace. It would be especially helpful to include baselines from the RL literature, as this represents a significant part of the literature. 
+ The results presented in Table 1 could be improved through statistical analysis; best results are marked in bold, but it is not indicated if they are significant, nor for how many independent trials the comparison is performed. + Code is not included. Supplementary Material: The supplementary materials present additional experimental results and a useful explanation of the differences between the proposed method and a similar evolutionary method, WireMask-EA. Relation To Broader Scientific Literature: The article places itself well in the literature on the macro placement problem. It does not engage as fully with the evolutionary literature. For example, visualizing the different search trajectories taken by EGPlace compared to WireMask-EA would be a useful demonstration of the benefits of this new mutation operator. Essential References Not Discussed: Geng, Zijie, et al. "LaMPlace: Learning to Optimize Cross-Stage Metrics in Macro Placement." The Thirteenth International Conference on Learning Representations. Ochoa, Gabriela, Katherine M. Malan, and Christian Blum. "Search trajectory networks: A tool for analysing and visualising the behaviour of metaheuristics." Applied Soft Computing 109 (2021): 107492. Other Strengths And Weaknesses: The article positions the contribution as a "novel evolutionary framework," however the main contribution is the mutation operator, as confirmed by the ablation study. At what point could other parts of the evolutionary algorithm be specified for this problem? It is surprising that a fitness-proportionate selection is used, for example; in genetic algorithms, tournament selection is now standard. Was elitism or truncation selection explored? Is recombination possible? A greater study of the evolutionary algorithm's details that engages with recent evolutionary literature would improve the article, especially if the contribution is intended to be the evolutionary framework in full and not only the mutation operator. 
Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
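To make the review's description of the operator concrete: a greedy-repositioning-guided mutation of the kind summarized above might look like the following sketch. The `Layout` stub, the scoring interface, and the `k` parameter are illustrative assumptions, not the paper's actual implementation:

```python
import random

class Layout:
    """Toy grid layout used only for illustration: occupied cells block placement."""
    def __init__(self, w, h, occupied=()):
        self.w, self.h = w, h
        self.occupied = set(occupied)

    def is_free(self, x, y, module):
        return (x, y) not in self.occupied

def guided_mutation(layout, module, score, k=5):
    """Score every free position for one module, keep the k greedily best
    candidates (lower score = better), then pick one at random: the guidance
    encourages good placements while the random choice preserves search."""
    candidates = [(x, y) for x in range(layout.w) for y in range(layout.h)
                  if layout.is_free(x, y, module)]
    best = sorted(candidates, key=lambda p: score(layout, module, p))[:k]
    return random.choice(best)
```

A plain random mutation would instead sample uniformly from `candidates`; restricting the draw to the top-`k` scored positions is what biases the search toward good placements while still allowing exploration.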
Rebuttal 1: Rebuttal: Thanks for your valuable suggestions! **Motivation for choosing ISPD2005 and Additional Experiments on ICCAD2015** The major competitors, including MaskPlace, EfficientPlace, and WireMask-EA, conduct experiments on ISPD2005, so we use the same setting for a fair comparison. We also perform experiments on ICCAD2015, with results shown in Table (a) (see feedback to Reviewer pQfq), where the HPWL for placing the largest 1024 macros is reported and will be included in the revised version. **Response Regarding Baseline Methods** We omitted the baselines SP-SA, NTUPlace3, RePlace, DREAMPlace, Graph Placement, and DeepPR, as they have been compared with our main competitors, MaskPlace and WireMask-EA, in their respective papers, where our competitors outperformed them. We plan to include results from these baselines in the revised version for a more comprehensive comparison. **Statistical Analysis for Table 1** We conducted 5 independent trials, consistent with the compared methods, and report results as mean ± standard error. This will be included in the caption of Table 1 in the revised version. A Wilcoxon rank-sum test was performed based on the 5 trials. The symbols “+,” “–,” and “≈” indicate where the baseline method’s HPWL is significantly better than, worse than, or similar to EGPlace's at a 0.05 significance level. The analysis is based on results from the released code. Due to time constraints, we ran WireMask-EA and EfficientPlace with 5 seeds on bigblue2 and bigblue4, so only partial results are reported. Additional results with 5 seeds will be included in the revised version. The results show that EGPlace outperforms MaskPlace and Chipformer on all 8 benchmarks (confidence probability 0.0061). It also outperforms EfficientPlace on "bigblue2" and "bigblue4," and WireMask-EA on "bigblue4" (confidence probability 0.0061). EGPlace is statistically similar to WireMask-EA on "bigblue2" (confidence probability 0.2654). 
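For readers who wish to reproduce this kind of analysis, the exact two-sided rank-sum p-value for two small samples can be computed by direct enumeration. The sketch below uses illustrative numbers, not the actual HPWL measurements; a library call such as `scipy.stats.mannwhitneyu` with `method="exact"` would serve equally well:

```python
from itertools import combinations

def ranksum_exact_p(a, b):
    """Exact two-sided Wilcoxon rank-sum p-value by enumerating all rank
    assignments (assumes no ties; feasible for small samples such as 5 vs. 5)."""
    pooled = sorted(a + b)
    rank = {v: i + 1 for i, v in enumerate(pooled)}
    w_obs = sum(rank[v] for v in a)           # rank sum of the first sample
    n, total = len(a), len(pooled)
    mean_w = n * (total + 1) / 2              # rank sum expected under H0
    ws = [sum(c) for c in combinations(range(1, total + 1), n)]
    extreme = sum(1 for w in ws if abs(w - mean_w) >= abs(w_obs - mean_w))
    return extreme / len(ws)

# Illustrative HPWL values over 5 seeds (made up for demonstration only):
method_a = [5.71, 5.75, 5.78, 5.80, 5.83]
method_b = [6.90, 7.05, 7.11, 7.20, 7.33]
p = ranksum_exact_p(method_a, method_b)
# Fully separated samples of size 5 give the smallest possible two-sided
# exact p-value, 2/252 ~= 0.0079, which is below the 0.05 level used above.
```

Note that with only 5 trials per method, the exact test has a hard floor on attainable p-values, so "significant at 0.05" is about the strongest conclusion such a comparison can support.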
**Code is Not Included** We have uploaded our code to https://anonymous.4open.science/r/EAPlace-31A4. **The Article does not Engage as Fully with the Evolutionary Literature** We primarily focus on improving macro placement quality and efficiency, using an evolutionary framework for efficient iterations and broader search space exploration. However, we do not delve into the specifics of evolutionary algorithms. The key differences between WireMask-EA and EGPlace are: 1. WireMask-EA uses random mutation, while EGPlace employs guided reconstruction-based mutation for better sample efficiency; 2. EGPlace improves efficiency by selectively repositioning modules, avoiding the overhead of full layout reconstruction. We appreciate the recommended paper and believe the tool could help visualize the search trajectories of EGPlace and WireMask-EA, aiding comparison. We plan to conduct such visualizations and analyze their differences in future work. **Discussion of Related Papers** We appreciate the recommended papers and plan to include LaMPlace in the related work section of our revised version. We also analyze the relationship between LaMPlace and EGPlace in our response to Reviewer pQfq. The trajectory visualization tool in the second paper is useful, and we plan to incorporate visualizations in future work. **Response Regarding the Evolutionary Framework** Our key ideas are as follows: 1. Mainstream RL methods suffer from high training costs and limited global context in decision-making. 2. Greedy adjustments to the entire layout can efficiently improve results. 3. An evolutionary search framework enables efficient iterations and broader exploration of the search space. Based on this, we adopt an evolutionary framework, focusing on improving layout quality and efficiency, rather than the specifics of evolutionary algorithms. We will remove the term "novel" in the revised version to avoid confusion. 
We have considered various strategies regarding selection and recombination in genetic algorithms. We think recombination is challenging, as merging different layouts often causes significant overlap. Post-processing methods like legalization and greedy reconstruction can reduce overlap but may compromise our efficiency advantage. For selection, we observe that high-quality layouts require sufficient refinement. Both EGPlace and WireMask-EA use small populations (WireMask-EA with 1, EGPlace with 5) for broader exploration. Given this setup, we did not implement a complex selection process. We use fitness-proportionate selection to determine which layout undergoes mutations, ensuring better layouts are adjusted with greater probability while maintaining exploration. Our focus is on a simple yet effective approach that balances efficiency and layout quality. In future work, we plan to explore whether recombination can be efficiently implemented without excessive overlap while improving layout quality. If recombination is used, a larger population may be needed for sufficient exploration, requiring more effective selection strategies.

---

Rebuttal Comment 1.1: Comment: Thank you for the clarifications. Could the authors please clarify the choice of evolutionary algorithm? I'm glad to see this referred to as "fitness-proportionate selection" in your response, as opposed to "fitness-guided selection", as "fitness-proportionate" is a known term in the evolutionary literature. However, I must note that this is not how most evolutionary algorithms work. Consider the $(1+\lambda)$ EA:

    Initialize x randomly
    while not terminate
        x_p = x
        for i in [1,λ]
            x_i = mutate(x_p)
            if f(x_i) < f(x)
                x = x_i
    return x_p

Each iteration has a population which is evaluated, selected, and then used for modification. 
This same loop applies to genetic algorithms, where using a random selection method like fitness-proportionate or tournament for parent selection and a second selection method like truncation is common. As noted in my first review, better engagement with the evolutionary literature would benefit this article. It isn't just to add references - the evolutionary algorithm used here is founded on some choices that have been studied in over 30 years of literature. I'll note the explanation of the selection scheme from the paper: "This fitness-guided selection is based on the intuition that layouts with higher fitness are more likely to result in high-quality layouts after adjustment. It ensures a more effective search while still allowing for random exploration." This intuition makes sense. But it has also been shown that rank-based selection, rather than fitness-based, is helpful to search as it is invariant to search space transformations. So a fitting alternative to fitness-proportionate selection would be tournament selection. However, for small population sizes like the ones used here, it isn't clear to me that any selection scheme besides truncation is necessary. Truncation selection is implicitly done in the proposed method during the exclude() method. So my understanding of the evolutionary algorithm used here is:

    Initialize x1...xλ randomly
    sort(x1...xλ, f(x1)...f(xλ))
    x_best = f(x1)
    while not terminate
        x = fp_select(x1...xλ)
        x_p = mutate(x)
        if f(x_p) < f(xλ)
            xλ+1 = x_p
            sort(x1...xλ+1, f(x1)...f(xλ+1))
            delete(xλ+1)
        if f(x_p) < f(x_best)
            x_best = x_p
    return x_best

Is that the case? So fitness-proportionate selection is used for parent selection and there is one new individual created per generation? The deletion of the worst individual per generation is like truncation selection, but isn't done in the same way. Below are some examples that could help ground the evolutionary algorithm:

Blickle, Tobias, and Lothar Thiele. "A comparison of selection schemes used in evolutionary algorithms." Evolutionary Computation 4.4 (1996): 361-394.
Jansen, Thomas, Kenneth A. De Jong, and Ingo Wegener. "On the choice of the offspring population size in evolutionary algorithms." Evolutionary Computation 13.4 (2005): 413-440.
Doerr, Benjamin, Carola Doerr, and Franziska Ebel. "From black-box complexity to designing new genetic algorithms." Theoretical Computer Science 567 (2015): 87-104.
Hansen, Nikolaus, et al. "Impacts of invariance in search: When CMA-ES and PSO face ill-conditioned and non-separable problems." Applied Soft Computing 11.8 (2011): 5755-5769.

Could the authors try to clarify their evolutionary algorithm in terms of existing algorithms? Furthermore, could justification be given for not using standard methods, like the 1+1, $1+\lambda$, or a genetic algorithm with tournament selection? I'll note that basing the code on popular open-source evolutionary libraries like pymoo or deap would avoid this confusion.

---

Reply to Comment 1.1.1: Comment: Thank you for your valuable comments! We appreciate your accurate understanding and professional perspective. Your description in the second algorithm exactly captures the core idea of our approach. As you have pointed out, our method selects a layout from the population based on its fitness and applies a mutation operation to generate one offspring. The offspring is then added to the population, and the least fit layout is subsequently removed. This removal process resembles truncation selection. In our framework, a standard evolutionary algorithm could certainly be employed. However, we chose a tailored algorithmic design guided by empirical observations of experimental results to better balance placement quality and computational efficiency. Our proposed method can be viewed as an extension of the (1+1)-EA strategy employed by the baseline method WireMask-EA, distinguished by its use of a larger population size. 
Notably, when the population size is set to 1, our method becomes equivalent to (1+1)-EA. The decision to increase the population size is motivated by experimental evidence. As shown in Figure 6 in the manuscript, using a moderate population size (e.g., 3–5) results in better performance than a population size of 1. We believe this improvement is due to the broader search space enabled by a larger population, allowing more individuals the opportunity to evolve and contribute to the search process, ultimately leading to better placement outcomes. Since our approach extends the (1+1)-EA by enlarging the population size (i.e., having more than one individual in the population), it requires appropriate mechanisms for selecting individuals to generate offspring and for retaining promising individuals within the population. Unlike the (1+λ)-EA, which generates multiple offspring per iteration through multiple mutation operations, or standard genetic algorithms with tournament selection, which select multiple individuals to produce offspring, our method selects only one individual from the population in each iteration and applies a single mutation to generate one offspring. This design choice aims to reduce computational overhead. We find that generating one offspring per iteration tends to be sufficient for obtaining good results. Furthermore, since both the parent and the offspring can remain in the population, they continue to have opportunities to evolve in subsequent iterations. Producing multiple offspring or selecting multiple individuals at each iteration may not be essential and could increase computational cost without providing significant additional benefits. After generating the offspring, we remove the least-fit individual from the population using a simple truncation-like strategy. 
Given the relatively small population size in our setting, we believe this straightforward approach is sufficient and that more complex selection strategies may not offer substantial additional benefits. We believe that our proposed method is better suited to the current setting and achieves a good balance between efficiency and performance. (1+1)-EA and (1+λ)-EA are well-suited for scenarios with a population size of one. Tournament selection may be more effective in larger populations, where selecting multiple individuals per iteration can offer more benefits. Therefore, we chose to adopt the current approach in our design.
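The loop described in this reply (fitness-proportionate parent selection, a single offspring per iteration, truncation-like removal of the least-fit layout) can be sketched as follows. The `fitness` and `mutate` arguments are placeholders; fitness is assumed to be a positive, higher-is-better score (e.g., an inverse of HPWL), so this is an illustration of the strategy, not the authors' code:

```python
import random

def evolve(init_pop, fitness, mutate, iters):
    """Each iteration: pick one parent in proportion to fitness, create one
    offspring by mutation, then drop the least-fit individual so the
    population size stays constant (truncation-like removal)."""
    pop = list(init_pop)
    for _ in range(iters):
        parent = random.choices(pop, weights=[fitness(x) for x in pop])[0]
        pop.append(mutate(parent))
        pop.remove(min(pop, key=fitness))
    return max(pop, key=fitness)

# Toy usage: search for a value close to a target.
random.seed(0)
fit = lambda x: 1.0 / (1.0 + abs(x - 3.0))        # positive, higher is better
best = evolve([0.0, 5.0, 10.0], fit,
              lambda x: x + random.uniform(-1, 1), iters=200)
```

With a population of size 1 the same loop degenerates to a (1+1)-EA: the single individual is always selected, and the removal step keeps the better of parent and offspring, matching the relationship to WireMask-EA described above.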
Summary: This paper presents EGPlace, an evolutionary optimization framework for macro placement. EGPlace addresses the limitations of prior approaches with two key parts: 1) a greedy repositioning-guided mutation operator that targets critical layout regions and 2) an efficient mask computation algorithm. Experimental results on ISPD2005 and Ariane CPU benchmarks show that EGPlace reduces wirelength compared to WireMask-EA and EfficientPlace while achieving speedups. It also performs well in congestion control and mixed-size placement scenarios. Claims And Evidence: Yes, the empirical claims are well supported. However, there are some concerns about the evaluation criteria. Methods And Evaluation Criteria: The proposed approach demonstrates promising efficacy in optimizing HPWL. However, there are several key considerations: 1. PPA Evaluation Gap. While HPWL serves as an important proxy metric, numerous studies in EDA and ML have highlighted its limited correlation with final PPA performance (e.g., timing, routed length, etc.). I strongly recommend incorporating PPA evaluations using established open-source platforms such as OpenROAD [1] or the framework described in [2]. These tools enable final PPA evaluation, which would substantially improve this paper. 2. Expanded Benchmark. Testing on a broader range of more recent industrial chip designs (e.g., ICCAD 2015) would strengthen the generalizability claims, providing a more robust validation of the algorithm's adaptability. References: [1] OpenROAD Project. https://github.com/the-openroad-project [2] Benchmarking End-to-End Performance of AI-Based Chip Placement Algorithms. arXiv, 2024. Theoretical Claims: N/A Experimental Designs Or Analyses: The experiments and analyses on HPWL make sense to me. My main concerns are regarding the evaluation and the need for more benchmarks. 
Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: There are many recent papers on reinforcement learning for chip placement [1-3], which I believe should be at least discussed if not compared. [1-2] can be categorized as RL-based adjustment methods. [3] applies a learnable mask and achieves SOTA performance. [1] Reinforcement Learning Policy as Macro Regulator Rather than Macro Placer. NeurIPS, 2024. [2] Mixed-Size Placement Prototyping Based on Reinforcement Learning with Semi-Concurrent Optimization. ASPDAC, 2025. [3] LaMPlace: Learning to Optimize Cross-Stage Metrics in Macro Placement. ICLR, 2025. Other Strengths And Weaknesses: Strengths 1. The proposed mutation operator makes sense to me; it can significantly improve exploration efficiency and HPWL results compared to the random mutations in WireMask-EA. 2. The efficient mask computation is also important to rapidly evaluate potential module positions, reducing computational complexity from quadratic to linear. 3. The framework illustration figure is impressive and clear. Weaknesses 1. See the evaluations mentioned above. Other Comments Or Suggestions: 1. WireMask-EA was proposed at NeurIPS 2023 rather than 2024. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive review and valuable suggestions! Below, we provide a detailed response to your questions and comments. If any of our responses fail to sufficiently address your concerns, please inform us, and we will promptly follow up. **Experiments on the ICCAD 2015 Benchmark** In order to show the generalizability of EGPlace, we further conduct tests on 5 circuits from the ICCAD 2015 dataset, selecting the 1024 macros with the largest sizes for placement. The macro placement results, compared to EfficientPlace, are presented in Table (a).

## Table (a): HPWL (×10⁷) for Macro Placement on the ICCAD 2015 Benchmark

| Method | Superblue1 | Superblue3 | Superblue4 | Superblue5 | Superblue7 |
|----------------|------------|------------|------------|------------|------------|
| EfficientPlace | 77.59 | 34.33 | 84.93 | 539.40 | 29.05 |
| EGPlace | 12.47 | 29.47 | 17.60 | 78.57 | 29.21 |

The results show that EGPlace outperforms EfficientPlace on 4 out of 5 benchmarks. The significantly higher HPWL of EfficientPlace on Superblue1, 4, and 5 may result from the poor placement of the first few modules in nodes close to the root in MCTS, which ultimately affects the overall placement quality. We plan to include the full results in the revised version. We also attempted macro placement using DreamPlace 4.1.0 on the Bookshelf file but encountered failures during the legalization stage. We will continue to investigate this issue and, if time permits, report the full results on the ICCAD 2015 benchmark, along with results for DreamPlace and other baseline methods. **Discussion on Recent Related Work** We will incorporate related work [1-3] into the related work section. Both [1] and [2] train RL policies that iteratively adjust module locations, as refining modules on the full layout enables capturing more comprehensive state information, which is beneficial for improving layout quality. We share the same observations with these methods. 
However, unlike these methods, we adjust the layout using efficient heuristic rules combined with a certain degree of randomness instead of training RL agents. Specifically, we select modules based on their scores and reposition them greedily, while utilizing an evolutionary algorithm to maintain a pool of good layouts. The search strategy we use is simple but effective in obtaining good-quality layouts. Additionally, our method provides significant advantages in terms of efficiency. LaMPlace [3] makes significant advancements in incorporating cross-stage PPA metrics to guide layout generation. It trains a predictor to estimate these metrics through offline learning and generates learnable masks to assist placement. Our method, EGPlace, which leverages an evolutionary framework and adjusts module locations through a greedy reconstruction-based operator, is orthogonal to LaMPlace. The learnable masks related to PPA metrics generated by LaMPlace can be integrated into our approach to guide module adjustment, potentially achieving better performance in terms of PPA metrics. **Response on PPA Evaluation** We have attempted to use OpenROAD to evaluate the PPA metrics over the past few days. However, as noted in the ChipBench paper, OpenROAD is not compatible with the ISPD2005 and ICCAD2015 datasets due to the lack of essential information (e.g., necessary design kits). Therefore, we had to switch to the "ariane133" benchmark, which is supported by OpenROAD. The evaluation process is time-consuming. Due to time constraints, we plan to upload our results once the evaluation is completed. **WireMask-EA was proposed at NeurIPS 2023 rather than 2024** We sincerely apologize for the typo. We will correct this issue in the revised version.
CAT: Contrastive Adversarial Training for Evaluating the Robustness of Protective Perturbations in Latent Diffusion Models
Accept (poster)
Summary: This paper studies protective perturbations for LDMs, where the success of existing methods is based on distorting latent representations. To examine these protections, the authors propose Contrastive Adversarial Training (CAT), which inserts lightweight adapters into the latent autoencoder. Specifically, CAT realigns the latent representations, reducing the effectiveness of protective perturbations. Claims And Evidence: CAT can effectively neutralize representation distortion-based protective perturbations through contrastive adversarial training. Methods And Evaluation Criteria: The proposed adaptive attack utilizes a contrastive adversarial loss with adapters inserted into the latent autoencoder, thereby “attacking” the distortions caused by protective perturbations. Theoretical Claims: No Experimental Designs Or Analyses: Experimental results are well organized. The authors compare CAT against nine protective perturbation methods under different customization frameworks (e.g., DreamBooth and LoRA). Both quantitative results (improvements in FSS and FQS) and qualitative examples support their claims. Supplementary Material: No Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: It would be good to try training-free DM customization methods (e.g., IP-Adapter). Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable comments (Q). We hope that our responses (A) have fully addressed the concerns, and remain committed to clarifying any further questions that may arise during the discussion period. > **Q1: It is good to try training-free DM customization methods (e.g., IP-Adapter).** **A1:** We appreciate this insightful suggestion. Indeed, training-free customization methods such as IP-Adapter represent a promising direction for low-cost and scalable diffusion model customization. While our current study focuses on training-based customization approaches (e.g., DreamBooth and LoRA) due to their widespread usage and controllability in evaluating robustness, extending CAT to training-free settings is an exciting future direction. We plan to incorporate IP-Adapter into future evaluations to further evaluate the generalization of CAT beyond current training-based frameworks. **Table 3.** Quantitative results for object-driven image synthesis using CAT methods customized in DreamBooth for the CelebA-HQ dataset compared to the Noisy-Upscaling (NU) and Gaussian Filtering (GF) methods. 
| CelebA-HQ | FSS$\uparrow$ | | | | FQS$\uparrow$ | | | | | --------- | ------------- | ------ | ----- | ----- | ------------- | --------- | --------- | ----- | | | CAT-both | CAT-en | NU | GF | CAT-both | CAT-en | NU | GF | | AdvDM(+) | **0.643** | 0.529 | 0.531 | 0.492 | 0.431 | 0.448 | **0.481** | 0.352 | | AdvDM(-) | **0.623** | 0.571 | 0.469 | 0.607 | 0.549 | **0.611** | 0.498 | 0.526 | | Mist | **0.572** | 0.501 | 0.491 | 0.488 | **0.597** | **0.580** | 0.475 | 0.493 | | SDS(+) | **0.602** | 0.499 | 0.599 | 0.409 | 0.413 | 0.423 | **0.503** | 0.302 | | SDS(-) | **0.678** | 0.599 | 0.468 | 0.583 | **0.597** | **0.587** | 0.494 | 0.493 | | SDST | **0.594** | 0.485 | 0.470 | 0.446 | 0.587 | **0.588** | 0.474 | 0.464 | | Glaze | **0.610** | 0.577 | 0.533 | 0.547 | 0.618 | **0.676** | 0.496 | 0.533 | | Anti-DB | **0.662** | 0.597 | 0.540 | 0.575 | 0.608 | **0.664** | 0.469 | 0.543 | | MetaCloak | **0.642** | 0.578 | 0.521 | 0.540 | 0.460 | **0.475** | 0.395 | 0.324 | **Table 4.** Quantitative results for object-driven image synthesis using CAT methods customized in DreamBooth for the VGGFace2 dataset compared to the Noisy-Upscaling (NU) and Gaussian Filtering (GF) methods. 

| VGGFace2 | FSS$\uparrow$ | | | | FQS$\uparrow$ | | | |
| --------- | ------------- | --------- | --------- | --------- | ------------- | --------- | ----- | ----- |
| | CAT-both | CAT-en | NU | GF | CAT-both | CAT-en | NU | GF |
| AdvDM(+) | 0.534 | **0.560** | 0.518 | 0.506 | 0.481 | **0.578** | 0.506 | 0.363 |
| AdvDM(-) | **0.564** | 0.547 | 0.529 | 0.563 | 0.635 | **0.676** | 0.563 | 0.506 |
| Mist | 0.557 | 0.521 | **0.566** | 0.518 | 0.662 | **0.701** | 0.518 | 0.437 |
| SDS(+) | 0.486 | **0.508** | 0.498 | 0.402 | 0.438 | **0.569** | 0.402 | 0.281 |
| SDS(-) | 0.570 | 0.569 | 0.509 | **0.593** | **0.700** | 0.671 | 0.593 | 0.558 |
| SDST | **0.559** | 0.546 | 0.521 | 0.538 | 0.627 | **0.671** | 0.538 | 0.482 |
| Glaze | **0.607** | 0.576 | 0.503 | 0.549 | **0.733** | 0.723 | 0.549 | 0.562 |
| Anti-DB | **0.584** | 0.546 | 0.566 | 0.548 | 0.636 | **0.656** | 0.548 | 0.499 |
| MetaCloak | 0.560 | **0.631** | 0.566 | 0.542 | 0.504 | **0.633** | 0.542 | 0.349 |
Summary: This paper examines the effectiveness of adversarial perturbations in protecting data from unauthorized customization in LDMs. The authors reveal that these perturbations work by distorting latent representations and propose CAT as an adaptive attack that reduces their effectiveness. Experimental results highlight the vulnerability of current protection methods. Claims And Evidence: Most claims are supported by clear evidence. Methods And Evaluation Criteria: Yes, but could be improved. For instance, in Table 1 or for the style mimicry scenario, it would be more insightful to evaluate using fidelity metrics, such as FID. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: The study is reasonably demonstrated, but some expectations are not met. There are no quantitative results for style mimicry, and in addition, there are no comparisons with other purification methods. The authors only provide results with and without their method, which makes the effectiveness of the proposed approach unconvincing. Supplementary Material: Yes. Relation To Broader Scientific Literature: The intuition and experiments focus on mimicry tasks, which have emerged as significant concerns in the context of generative AI, particularly regarding copyright issues. Essential References Not Discussed: No Other Strengths And Weaknesses: The core rationale behind the proposed method is reasonable. However, as noted in the “Experimental Design” section, the lack of sufficient experiments and analysis limits the convincingness of the work. Other Comments Or Suggestions: No. Questions For Authors: I am curious about the recent trend of research primarily focusing on mimicry attacks to circumvent existing protection methods. I think this approach inherently has limitations in advancing genuinely robust protection methods, including the current study.
Given this purification, how might we move towards stronger protection methods, or at least derive valuable insights to guide future research directions? It would be very insightful and helpful to the future reader. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable comments (Q). We hope that our responses (A) have fully addressed the concerns, and remain committed to clarifying any further questions that may arise during the discussion period. > **Q1: Yes, but could be ... using the fidelity metrics, such as FID.** **A1:** We thank the reviewer for pointing this out. In the evaluation of **Table 1 below**, we use the FID metric to assess generation quality. Specifically, for each identity, we generate 30 images per prompt (two prompts in total), and compute FID by comparing the results to images generated by a customization model trained on clean samples. We observe that both CAT-both and CAT-en consistently achieve lower FID scores than the baseline across both datasets and all evaluated protections. These results demonstrate the effectiveness of our CAT under the evaluated settings. **Table 1.** Quantitative results for object-driven image synthesis using CAT methods customized in DreamBooth for CelebA-HQ and VGGFace2 datasets compared to the Noisy-Upscaling (NU) and Gaussian Filtering (GF) methods.

| FID$\downarrow$ | CelebA-HQ | | | VGGFace2 | | |
| --------------- | --------- | --------- | --------- | -------- | --------- | --------- |
| | Baseline | CAT-both | CAT-en | Baseline | CAT-both | CAT-en |
| AdvDM(+) | 340.0 | 264.9 | **223.7** | 435.2 | 274.9 | **249.0** |
| AdvDM(-) | 134.3 | 104.0 | **102.0** | 203.9 | 189.4 | **188.6** |
| Mist | 263.6 | **133.6** | 136.1 | 359.6 | **187.8** | 198.9 |
| SDS(+) | 327.2 | 277.4 | **247.6** | 363.9 | 295.4 | **255.7** |
| SDS(-) | 125.4 | **103.4** | 110.2 | 208.9 | **183.3** | 185.6 |
| SDST | 223.0 | **133.3** | 133.5 | 335.8 | **195.3** | 200.4 |
| Glaze | 196.7 | 100.1 | **90.4** | 228.0 | **160.9** | 191.3 |
| Anti-DB | 180.4 | 131.4 | **106.4** | 320.5 | 202.1 | **190.8** |
| MetaCloak | 175.0 | 179.6 | **171.6** | 316.3 | 200.4 | **170.9** |

> **Q2: The core rationale ...
proposed approach unconvincing.** **A2:** We sincerely thank the reviewer for the valuable suggestion. To the best of our knowledge, this is the first work that systematically evaluates the robustness of nine existing protective perturbation methods across two downstream tasks, which requires careful data preparation and repeated experiments. That said, we totally agree that for the style mimicry task, only qualitative results were provided in the main paper due to space limitations. To address this, we include quantitative results **in Table 2 below**, measured by the CLIP-IQA score [r2], for both CAT-both and CAT-en settings. We observe that in the style mimicry task, CAT consistently outperforms the baseline across all evaluated protection methods, further demonstrating the effectiveness of our approach. **Table 2.** Quantitative results for style mimicry using CAT methods customized in DreamBooth for the WikiArt dataset.

| CLIP-IQA$\uparrow$ | Baseline | CAT-both | CAT-en |
| ------------------ | -------- | -------- | --------- |
| AdvDM(+) | 0.343 | 0.390 | **0.621** |
| AdvDM(-) | 0.463 | 0.536 | **0.697** |
| Mist | 0.345 | 0.465 | **0.694** |
| SDS(+) | 0.285 | 0.366 | **0.366** |
| SDS(-) | 0.501 | 0.481 | **0.723** |
| SDST | 0.406 | 0.485 | **0.712** |
| Glaze | 0.532 | 0.614 | **0.730** |
| Anti-DB | 0.315 | 0.544 | **0.672** |

> **Q3: I am curious about the ... and helpful to the future reader.** **A3:** Thank you for the insightful comment. We fully agree that while adaptive attacks reveal current vulnerabilities in protective perturbations, their broader value lies in informing the development of more effective defenses. In this regard, we would like to emphasize that our CAT method not only exposes the limitations of existing protections but also offers potential insights for future defense strategies.
In particular, our findings suggest that effective protection for LDMs may benefit from incorporating defense mechanisms that explicitly consider the diffusion process, especially in end-to-end optimized frameworks, rather than relying solely on latent autoencoders, which can be more easily compromised by adaptive attacks. We believe this perspective highlights a promising direction for designing more robust protective perturbations to better safeguard the IP rights of data owners. A more detailed discussion will be added in the revised version. [r2] Wang, J., Chan, K. C., & Loy, C. C. (2023, June). Exploring CLIP for assessing the look and feel of images. In *Proceedings of the AAAI Conference on Artificial Intelligence* (Vol. 37, No. 2, pp. 2555-2563).
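As a concrete reference for how the FID numbers in Table 1 are interpreted, below is a minimal numpy sketch of the Fréchet distance between Gaussians fitted to two feature sets. The random feature matrices here are illustrative stand-ins; in practice the features would come from an Inception network applied to the generated and clean-model images.

```python
import numpy as np

def _sqrtm_psd(a):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0.0, None)  # guard against tiny negative eigenvalues
    return (v * np.sqrt(w)) @ v.T

def fid(feats_a, feats_b):
    """Frechet distance between Gaussians fitted to two (n, d) feature matrices."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    sb = _sqrtm_psd(cov_b)
    # tr((cov_a cov_b)^1/2) equals tr((sb cov_a sb)^1/2); the latter is symmetric PSD
    covmean = _sqrtm_psd(sb @ cov_a @ sb)
    return float(np.sum((mu_a - mu_b) ** 2) + np.trace(cov_a + cov_b - 2.0 * covmean))

rng = np.random.default_rng(0)
feats_clean = rng.normal(size=(500, 8))          # stand-in for clean-model features
feats_close = rng.normal(size=(500, 8))          # same distribution: low FID
feats_far = rng.normal(loc=2.0, size=(500, 8))   # shifted distribution: high FID
assert fid(feats_clean, feats_close) < fid(feats_clean, feats_far)
```

Lower FID thus means the generations on purified data are distributionally closer to generations of a model trained on clean samples, which is exactly the comparison described in A1.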
Summary: This paper investigates adversarial examples as protective perturbations in latent diffusion models. The authors reveal that adversarial examples are effective primarily due to the distortion of their latent representations. Based on this observation, they propose the CAT method to attack protective methods, highlighting their lack of robustness. Claims And Evidence: The claim that the experiments in section 3 explain why adversarial examples are effective as protective perturbations seems an overclaim to me. The most direct conclusions of the two experiments in section 3 are that adversarially perturbed images lead to larger distortions in latent representations and that the diffusion model is able to learn adversarial examples. It is expected to have an explicit measure of generation degradation to show the correlation between large distortions and protection effectiveness. Even if there is a strong correlation, the causality should be carefully claimed because there may be other potential factors. Methods And Evaluation Criteria: The proposed method and evaluation make sense for the task. Theoretical Claims: no theoretical claims in this paper Experimental Designs Or Analyses: See “Claims and Evidence” part. Other experiments are sound. Supplementary Material: I read all appendices, including case studies, experiment details, and additional experiments. Relation To Broader Scientific Literature: Latent Representation Distortion: Previous works [1,2] built adversarial examples to protect diffusion models. However, a deeper investigation into how the adversarial examples work is lacking. This paper conducts qualitative and quantitative experiments to analyze adversarial examples as protective methods. Adaptive Attack: Most existing attacks rely on purification. This paper proposes a model-based adaptation method. [1] Liang, Chumeng, et al.
"Adversarial example does good: Preventing painting imitation from diffusion models via adversarial examples." arXiv preprint arXiv:2302.04578 (2023). [2] Xue, Haotian, et al. "Toward effective protection against diffusion-based mimicry through score distillation." The Twelfth International Conference on Learning Representations. 2023. [3] Hönig, Robert, et al. "Adversarial perturbations cannot reliably protect artists from generative ai." arXiv preprint arXiv:2406.12027 (2024). Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: A deeper investigation into how adversarial examples are effective as protective perturbations helps better understand this kind of method. The model-based adaptive attack method is different from previous purification-based attacks, which shows the novelty of this work. The authors evaluate the attack against nine protection methods, demonstrating the effectiveness of the proposed attack. Weaknesses: See “Claims and Evidence” part. The conclusion in section 3 seems an overclaim to me. If I misunderstood, please correct me. Lack of baseline methods. Even if the model-based adaptive attack is novel, I still suggest comparing it with previous methods (e.g. IMPRESS++[1]) to show whether the model-based adaptive attack is more effective than other methods. The evaluation metrics “FSS” and “FQS” are not standard but reasonable. Separate reports of Retina-FDR and ISM may enhance understanding of the results. And so do TOPIQ-FDR and FIQ. In addition, some traditional evaluation metrics like PSNR and SSIM are also expected to be adopted to show the quality of the generated images. [1] Hönig, Robert, et al. "Adversarial perturbations cannot reliably protect artists from generative ai." arXiv preprint arXiv:2406.12027 (2024). Other Comments Or Suggestions: All the quotation marks should be “ ” instead of ” ”. The header shows “Submission and Formatting Instructions for ICML 2024”. 
Questions For Authors: According to Table 1, the “CAT-de” is consistently weaker than “CAT-both” and “CAT-en”, and is sometimes even weaker than baseline. What are the potential reasons behind this? Since “CAT-both” and “CAT-en” are comparable on different metrics, how to choose which method to use in practice? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable comments (Q). We will correct the identified typos and incorporate the suggested references you mentioned. We hope that our responses (A) have fully addressed the concerns, and remain committed to clarifying any further questions that may arise during the discussion period. > **Q1: The claim ... other potential factors.** **A1:** We would like to clarify that the observed strong correlation is between the effectiveness of adversarial noise as protective perturbations and the distortion in their latent representations. This conclusion is supported by the following observations: 1. **Latent representation distortion:** As shown in Fig. 3, adversarial noise (red dots) causes significantly more distortion in the latent space compared to random perturbations (yellow dots) under the same budget. After applying CAT, the adversarial samples (green dots) are noticeably re-aligned with the clean samples (blue dots). This observation is quantitatively verified in Fig. 4. 2. **Learnability of perturbed latent representations:** Fig. 5 shows that adversarially perturbed latent representations can still be effectively learned by the diffusion model, with learnability comparable to clean and randomly perturbed samples. 3. **Degradation of generation quality:** Fig. 6 and Table 1 in the manuscript present results when fine-tuning on adversarially perturbed data. Without CAT (baseline), generation quality drops significantly, while applying CAT improves quality to a large extent. Together, these observations and experimental results support our conclusion that the unlearnability of adversarially perturbed samples primarily comes from their latent representation distortion. That said, we fully agree with the reviewer that, even in the presence of a strong correlation, causality should be claimed with caution, as other factors may also play a role.
We thank the reviewer for this important insight, and will emphasize that this conclusion is mainly based on empirical experimental observations and there are other potential factors. > **Q2: Lack of baseline ... methods.** **A2:** We have compared our proposed CAT with baseline adaptive attacks against protective perturbations: Noisy-Upscaling [r1] (optimization-based) and Gaussian Filtering (low-pass filtering-based). *IMPRESS++ is not open-sourced yet, so we instead adopt Noisy-Upscaling, the superior adaptive attack from the same paper.* **Due to space limitations, experimental details are provided in our response to Reviewer GQca, with results on the CelebA-HQ dataset shown in Table 3, and on the VGGFace2 dataset in Table 4 (both in response to Reviewer jvVV).** We apologize for any inconvenience this may cause and appreciate your understanding. We can observe that our proposed CAT (CAT-both and CAT-en) consistently achieves comparable or superior performance to both Noisy-Upscaling and Gaussian Filtering across all protective perturbations, in terms of both FQS and FSS. These results highlight the competitive effectiveness of CAT compared to existing purification-based methods. > **Q3: The evaluation ... generated images.** **A3:** We appreciate the reviewer for bringing this to our attention. We will report the results for Retina-FDR and ISM separately, as well as those for TOPIQ-FDR and FIQ, in the supplementary materials. **Due to space limitations, we present the traditional evaluation metric FID to assess the quality of generated images in Table 1 (in response to Reviewer 6VvR).** It can be observed that our proposed CAT (either CAT-both or CAT-en) consistently improves generation quality in terms of FID across both datasets and all evaluated protection methods. These results demonstrate the effectiveness of our approach. > **Q4: According to Table 1 ... 
behind this?** **A4:** The potential reason behind this is that CAT-de only adds adapters to the VAE decoder, which has limited impact on realigning latent representations. Fine-tuning only the decoder is more challenging, as the diffusion model learns from distorted latents that are highly diverse, making accurate reconstruction difficult. The weaker performance of CAT-de compared to CAT-en and CAT-both further supports our observation that latent distortion is the key factor behind the effectiveness of adversarial noise as a protective perturbation. > **Q5: Since “CAT-both” ... use in practice?** **A5:** In our experiments, we keep the parameter size the same for CAT-both and CAT-en. Specifically, CAT-both uses half the adapter rank of CAT-en, as it adds adapters to both the encoder and decoder. While their performance varies across tasks, datasets, and metrics, the overall results are comparable. That said, we speculate that CAT-en may perform better on tasks requiring more specific or dense latent representations, **such as the style mimicry task in Table 2 (in response to Reviewer 6VvR)**, as it enables stronger latent alignment during customization.
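The latent-distortion observation in A1 (an adversarial perturbation distorts the encoder output far more than a random perturbation of the same budget) can be illustrated with a toy linear encoder, a hypothetical stand-in for the VAE encoder; for a linear map the worst-case direction is simply the top right-singular vector:

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_lat = 64, 8
E = rng.normal(size=(d_lat, d_in)) / np.sqrt(d_in)  # toy linear "encoder"
x = rng.normal(size=d_in)                            # a "clean image"
budget = 0.1                                         # perturbation norm budget

# Random perturbation, rescaled to the budget
delta_rand = rng.normal(size=d_in)
delta_rand *= budget / np.linalg.norm(delta_rand)

# Adversarial perturbation: the input direction the encoder amplifies most
_, _, vt = np.linalg.svd(E)
delta_adv = budget * vt[0]

dist_rand = np.linalg.norm(E @ (x + delta_rand) - E @ x)
dist_adv = np.linalg.norm(E @ (x + delta_adv) - E @ x)
assert dist_adv > dist_rand  # same budget, much larger latent distortion
```

The actual VAE encoder is nonlinear, so protection methods find the analogous direction by gradient-based optimization rather than an SVD, but the qualitative effect is the same: equal pixel-space budget, very unequal latent-space distortion.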
Summary: This paper proposes an attack named CAT that can break the protection of preventing diffusion models from effectively learning unauthorized data being perturbed by defensive noise. The authors first empirically identify that the mechanism behind existing defensive perturbations is to make embeddings of perturbed images look different from the embeddings of clean images. Based on this observation, the authors propose to break the protection via improving the robustness of the encoder in diffusion models against those defensive perturbations. Experiments are conducted on the stable-diffusion-v2.1 model with various protection methods. ## **update after rebuttal** After reading the rebuttal, I think this paper has novel results. So I decide to maintain my current score but tend to acceptance. For the authors, they should update the paper to include: - Additional results on data-augmentations. - Discussions on RobustCLIP. Claims And Evidence: See **Weaknesses/Questions/Suggestions**. Methods And Evaluation Criteria: See **Weaknesses/Questions/Suggestions**. Theoretical Claims: N/A Experimental Designs Or Analyses: See **Weaknesses/Questions/Suggestions**. Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: The concept of "preventing unauthorized image usage via defensive noises" originally comes from the research of "unlearnable examples". Therefore, the authors should also cite and review papers on "unlearnable examples" such as: 1. Huang et al. "Unlearnable Examples: Making Personal Data Unexploitable." ICLR 2021. 2. Fu et al. "Robust Unlearnable Examples: Protecting Data Against Adversarial Learning." ICLR 2022. 3. Ren et al. "Transferable Unlearnable Examples." ICLR 2023. 4. Liu et al. "Stable unlearnable example: Enhancing the robustness of unlearnable examples via stable error-minimizing noise." AAAI 2024. Other Strengths And Weaknesses: **Strengths:** 1. 
I like the observation of identifying the mechanism behind existing protective perturbations. It is intuitively sound and makes sense. **Weaknesses/Questions/Suggestions:** 1. I think the authors should compare their proposed method with some simple image augmentation-based defenses, for example, various low-pass filters. 2. Two major drawbacks of the proposed method are that to perform adversarial training following Eq.(1), the adversary needs to: (1) know in advance what defensive noise is leveraged by $x_a$, and (2) generate a set of protected training images for the specified defensive noise. 3. I think the AT method from RobustCLIP [r1], which aims to enhance the adversarial robustness of the CLIP image encoder against input perturbation within a pre-defined radius, is much better than the proposed CAT method. Although this method is originally designed for CLIP, it can be directly applied to any image encoders such as the VAE used in stable-diffusion-v2.1. Under RobustCLIP's AT method, the adversary can train the VAE encoder only once to make it robust to multiple types of protective perturbations. However, the proposed CAT method needs to retrain the VAE encoder every time for new defensive noise. 4. I suggest the authors add a small section explaining how Stable Diffusion works with encoders. The current paper is very difficult for readers without any background knowledge about stable diffusion to understand the threat model of this work. 5. What do those blue points $z_0$ mean in Fig.3? It seems that they are never explained in the paper. **Reference** [r1] Schlarmann et al. "Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models." ICML 2024. Other Comments Or Suggestions: See **Weaknesses/Questions/Suggestions**. Questions For Authors: See **Weaknesses/Questions/Suggestions**. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable comments (Q). We will incorporate the suggested references you mentioned. We hope that our responses (A) have fully addressed the concerns, and remain committed to clarifying any further questions that may arise during the discussion period. > **Q1: I think the authors ... various low-pass filters.** **A1:** We have compared our proposed CAT with two image augmentation-based adaptive attacks towards the protective perturbation: Noisy-Upscaling [r1] (based on optimization) and Gaussian filtering (based on low-pass filters). The evaluation is conducted on both the CelebA-HQ and VGGFace2 datasets using CAT-both and CAT-en, following the same experimental setting. For Noisy-Upscaling, we adopt default configurations provided in the original paper, and for Gaussian Filtering, we use a Gaussian kernel with size equal to 5 and $\sigma = 1.0$. **Due to space limitations, we present the results on the CelebA-HQ in Table 3 and on the VGGFace2 dataset in Table 4 (both in response to Reviewer jvVV)**. We apologize for any inconvenience this may cause and appreciate your understanding. We can observe that our proposed CAT (CAT-both or CAT-en) consistently achieves comparable or superior performance to both Noisy-Upscaling and Gaussian Filtering across all protective perturbations, in terms of both FQS and FSS. These results highlight the competitive effectiveness of CAT compared to existing purification-based methods. > **Q2: Two major drawbacks... for the specified defensive noise.** **A2:** We would like to clarify two key aspects of our threat model: the data owner has the clean data $x_c$ and intends to share it publicly. To prevent unauthorized customization, the owner applies a perturbation to generate the protected data $x_a$, which is then released. **In this setting, the adversary only has access to the protected data $x_a$ and is unaware of the specific protection method used by the data owner. 
This protected data already contains the defensive noise, and no additional generation is required by the adversary.** We want to demonstrate that, even without knowledge of the protection technique and with access only to the protected data, the adversary can still effectively learn using our proposed CAT. This is achieved by using our proposed **contrastive adversarial loss** during customization. We will include a more detailed explanation of the threat model in the revised version. > **Q3: I think the AT method from RobustCLIP [r1] ... for new defensive noise.** **A3:** We fully agree that RobustCLIP presents an effective adversarial training (AT) framework for enhancing the robustness of CLIP-based vision encoders, particularly against perturbations that disrupt text-image semantic alignment. That said, we would like to clarify key differences between RobustCLIP and our proposed CAT: 1. **Optimization objective:** RobustCLIP aims to *preserve semantic alignment between text and image representations*. In contrast, CAT counters protective perturbations that distort the latent representation in the autoencoder, using a *contrastive adversarial loss* to explicitly enforce latent space alignment. 2. **Implementation framework:** While it is possible to apply RobustCLIP to the VAE encoder in LDMs, this would involve fine-tuning the VAE on self-generated adversarial-clean pairs, an idea aligned with CAT's motivation. However, *CAT achieves this via lightweight adapters* applied during customization on protected data. These adapters are detachable at inference time, leaving the original model performance unaffected. In contrast, fine-tuning with RobustCLIP would require significantly more data and adversarial examples, and would modify the entire encoder, potentially degrading performance on the original task.
Despite these differences, we greatly appreciate the reviewer’s suggestion and will discuss it in the related work section to highlight these differences. > **Q4: I suggest the authors add a small section ... of this work.** **A4:** We totally agree that a brief explanation of how Stable Diffusion operates as an LDM with the latent autoencoder would improve the paper’s readability for those unfamiliar with this background. We will add a dedicated section to clarify this in the related work. > **Q5: What do those blue points $z_0$ mean in Fig.3? It seems that they are never explained in the paper.** **A5:** We apologize for the confusion caused by the incorrect labeling in the original Fig. 3. The blue points were intended to represent $z_c$, which is the latent embeddings of clean samples as correctly noted in the caption. We have updated both the figure and its caption to reflect this correction. Thank you for pointing this out. [r1] Hönig, R., Rando, J., Carlini, N., & Tramèr, F. Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI. In *The Thirteenth International Conference on Learning Representations*. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their rebuttal, which resolved most of my concerns. Therefore, I will maintain my current score but tend to accept this paper. Please include: (1) additional results on data-augmentations, and (2) discussions on RobustCLIP in your revised paper. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful and positive feedback. We're glad to hear that most of your concerns have been resolved, and we appreciate your support. As you suggested, we will make sure to include additional experiments and a discussion on RobustCLIP in the revised version of the paper. Thank you again for your helpful comments.
Differentially Private Analysis for Binary Response Models: Optimality, Estimation, and Inference
Accept (poster)
Summary: This paper proposes a new method for ensuring label differential privacy in classification tasks through the randomized response mechanism with optimality guarantees. Furthermore, the paper proposes differentially private confidence intervals based on the former method. Claims And Evidence: Most of the claims are supported by clear evidence. However, some concerns remain: (i) the Validity of Def. 3.1 (see below), and (ii) the meaning of "optimality". The paper states that the method borrows the T-optimality criterion. The criterion and the optimality guarantees, however, are never introduced. It remains unclear if the proposed method is indeed "optimal" (and the precise meaning thereof) or only an improvement over the standard RR mechanism. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense but are limited. Considering more high-dimensional and complex settings would be interesting to showcase and evaluate the optimality of the proposed LabelDP framework in practical settings. Theoretical Claims: Not all theoretical claims are stated as such and therefore are not proven. Especially for Def. 3.1. it is difficult to check the validity. Experimental Designs Or Analyses: The scope of the experiments is limited and very simplistic. Furthermore, the results are not reproducible, as no code is provided for the experiments. The evaluation of the empirical results is not coherent—for example, lines 378-381 state that significant differences between both scenarios can be observed. However, this is not the case; rather, it raises a different question: The coverage of the CIs of RRbR is always far from the desired level. Why is this the case? Supplementary Material: Yes. The supplementary material only consists of proofs of the lemmas and theorems. Relation To Broader Scientific Literature: The paper is closely related to literature on LabelDP using the RR mechanism. 
It distinguishes itself from the former in that it considers estimation performance when enforcing the privacy constraints. Essential References Not Discussed: / Other Strengths And Weaknesses: **Strengths:** - To the best of my knowledge, the idea of designing a LabelDP mechanism that considers specific optimality criteria for estimation is novel. **Weaknesses:** - The method is only compatible with GLMs. Non-parametric estimation is thus not possible. - The method only applies to binary labels. This limits the contribution of the work. - The confidence interval construction requires multiple assumptions that are unlikely to hold in practice. Other Comments Or Suggestions: / Questions For Authors: 1. Definition 3.1: This is not a definition. In my opinion, this is rather a theorem which has to be either referenced or proven. 2. Lines 218/219: Why is this "default assumption" reasonable? How can it be tested? 3. Assumption 4.2: Why is this assumption reasonable? Has this assumption been made in related work as well? 4. Lines 163/164: What is meant by "unsupervised aspects of the response"? 5. Experiments: How is the confidence interval for RRbR calculated? How can the extreme undercoverage in Figures 3 & 5 be explained? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: **Validity of Definition 3.1**, **Theoretical Claims** and **Question 1.** To satisfy $(\epsilon, \delta)$-LabelDP (and similarly for $\epsilon$-DP), the conditional probabilities $p_{00}=P(Y^*=0 \mid Y=0)$ and $p_{11}=P(Y^*=1 \mid Y=1)$, which lie in $(0,1)$, must meet the following privacy constraints: $$ P[Y^*=0 \mid Y=0] \leq e^{\epsilon} P[Y^*=0 \mid Y=1]+\delta, \quad P[Y^*=1 \mid Y=1] \leq e^{\epsilon} P[Y^*=1 \mid Y=0]+\delta, $$ and $$ P[Y^*=0 \mid Y=1] \leq e^{\epsilon} P[Y^*=0 \mid Y=0]+\delta, \quad P[Y^*=1 \mid Y=0] \leq e^{\epsilon} P[Y^*=1 \mid Y=1]+\delta. $$ Following Holohan et al. (2017) and Wang (2015), we also assume $p_{00}>0.5$ and $p_{11}>0.5$, implying $p_{00}+p_{11}>1$, to reflect the idea that the RR mechanism is more likely to return a truthful label than a flipped one. It is then easy to verify that these inequalities lead to the feasible region of $(p_{00}, p_{11})$ pairs in Definition 3.1. Since this region is explicitly constructed from existing results, we present it as a definition rather than a theorem. However, we agree it would be clearer to write $p_{00}>0.5$ and $p_{11}>0.5$ directly in Definition 3.1, and we will update the paper accordingly. **Claims and Evidence Regarding the Optimality Criterion.** Our $T$-optimality refers to selecting the best RR mechanism by maximizing the trace of the Fisher Information Matrix (FIM) within the feasible region defined in Definition 3.1. The trace summarizes the total information across all parameters and reflects the average precision of estimates. Hence, our approach ensures the most statistically efficient estimation under the LabelDP constraint. This is not merely an empirical improvement over standard RR mechanisms, but a principled optimal solution within a clearly defined feasible set. In the revision, we will formally define the $T$-optimal criterion and clarify its interpretation.
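A small numerical sketch of the feasible region described above (the function name and the probed grid values are illustrative, not from the paper):

```python
import numpy as np

def feasible(p00, p11, eps, delta=0.0):
    """Check whether an RR mechanism with p00 = P(Y*=0|Y=0), p11 = P(Y*=1|Y=1)
    satisfies the (eps, delta)-LabelDP constraints plus p00, p11 > 0.5."""
    e = np.exp(eps)
    truthful = p00 > 0.5 and p11 > 0.5            # RR favors the true label
    dp = (p00 <= e * (1 - p11) + delta and         # P[0|0] <= e^eps * P[0|1] + delta
          p11 <= e * (1 - p00) + delta and         # P[1|1] <= e^eps * P[1|0] + delta
          1 - p11 <= e * p00 + delta and           # P[0|1] <= e^eps * P[0|0] + delta
          1 - p00 <= e * p11 + delta)              # P[1|0] <= e^eps * P[1|1] + delta
    return truthful and dp

# At eps = 1, the symmetric RR probability e/(1+e) ~ 0.731 sits on the boundary:
assert feasible(0.70, 0.70, eps=1.0)             # inside the eps-DP region
assert not feasible(0.99, 0.99, eps=1.0)         # too truthful for eps = 1
assert not feasible(0.80, 0.60, eps=1.0)         # this asymmetric pair violates pure eps-DP
assert feasible(0.80, 0.60, eps=1.0, delta=0.1)  # but delta > 0 relaxes the region
```

The last two checks illustrate why $\delta$ enlarges the set of admissible asymmetric mechanisms, which is where a covariate-aware choice of $(p_{00}, p_{11})$ can pay off.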
**Methods and Evaluation Criteria** and **Weakness 1.** While our main focus is on binary response models for theoretical clarity, our method can naturally extend to multiclass and high-dimensional settings. Initial results are promising, and we plan to explore these extensions in future work. **Experimental Designs or Analyses.** We confirm that the R code was submitted as supplementary material. During the rebuttal, we also ran high-dimensional simulations, which showed strong performance, further supporting our method’s robustness. In simulation studies, scenario differences mainly come from varying correlation and variance. RRbR shows low coverage because it ignores covariates when privatizing labels, increasing variance. Our method incorporates covariates, leading to more accurate and stable coverage. **Weakness 2.** Please refer to our responses to Weakness 1 and Question 1 in the rebuttal to Reviewer 7wrY. **Weakness 3.** Assumptions 4.8 and 4.9 are standard in asymptotic inference. Assumption 4.8 (on convexity and smoothness of the log-likelihood) holds in most binary regression settings, and Assumption 4.9 (positive definiteness of the FIM) ensures model identifiability, a common condition in GLMs. While asymptotic results assume large samples, these assumptions often hold well in practice, even with moderate sample sizes, as supported by our real data analysis. **Question 2.** As explained above, we follow Wang (2015) to impose $p_{00} > 0.5$ and $p_{11} > 0.5$ (our default assumption) to ensure that our $T$-optimal RR mechanism still tends to return truthful responses, more accurately than random guessing. **Question 3.** We need Assumption 4.2 in our proof of Lemma 4.3 to guarantee that $\frac{\partial \mathcal{M}(\boldsymbol{\beta} ; p_{00}, p_{11})}{\partial p_{00}}>0$ (lines 571-572). Given that our feasibility region in Definition 3.1 explicitly enforces $p_{00} > 0.5$ and $p_{11} > 0.5$, this assumption is naturally satisfied.
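To make the $T$-optimality criterion from the previous response concrete, here is a toy grid search over $\epsilon$-DP-feasible $(p_{00}, p_{11})$ pairs for a logistic model with synthetic covariates. The grid, data, and coefficient values are purely illustrative (the paper characterizes the optimum analytically rather than by search):

```python
import numpy as np

def fim_trace(p00, p11, X, beta):
    """Trace of the Fisher information for beta when a logistic model's labels
    are released through an RR mechanism with retention probs (p00, p11)."""
    mu = 1.0 / (1.0 + np.exp(-X @ beta))      # P(Y = 1 | x)
    q = p11 * mu + (1.0 - p00) * (1.0 - mu)   # P(Y* = 1 | x) after randomization
    c = p00 + p11 - 1.0
    w = c**2 * mu**2 * (1.0 - mu)**2 / (q * (1.0 - q))
    return float(np.sum(w * np.einsum("ij,ij->i", X, X)))

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
beta = np.array([0.5, 1.0])
eps, e = 1.0, np.exp(1.0)

best, best_pair = -np.inf, None
for p00 in np.linspace(0.51, 0.99, 49):
    for p11 in np.linspace(0.51, 0.99, 49):
        # pure eps-DP feasibility (delta = 0), cf. Definition 3.1
        if (p00 <= e * (1 - p11) and p11 <= e * (1 - p00)
                and 1 - p11 <= e * p00 and 1 - p00 <= e * p11):
            t = fim_trace(p00, p11, X, beta)
            if t > best:
                best, best_pair = t, (p00, p11)

assert best_pair is not None
assert best >= fim_trace(0.70, 0.70, X, beta)  # beats an interior symmetric choice
```

The per-observation weight follows from the misclassified-Bernoulli likelihood: $\partial q / \partial \beta = (p_{00}+p_{11}-1)\,\mu(1-\mu)\,x$, so each observation contributes $c^2 \mu^2 (1-\mu)^2 / (q(1-q)) \cdot x x^\top$ to the FIM.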
**Question 4.** By ''unsupervised aspects of the response'', we intended to highlight that traditional RR designs ignore covariate $X$ when privatizing the response. In contrast, our approach leverages covariate $X$ to inform the design of the RR mechanism, effectively making it a supervised design approach. We will revise the wording to make this clearer. **Question 5.** The confidence intervals for RRbR are calculated by fitting the same binary response model used in our method, but using privatized responses generated by the traditional RR mechanism. Classical methods based on the asymptotic normality of maximum likelihood estimators are then applied to construct the intervals. However, this approach does not account for the additional bias and variability introduced by the RR mechanism, nor does it incorporate covariate information to mitigate such effects. As a result, it often leads to inaccurate and poorly calibrated confidence intervals, as shown in Figures 3 and 5. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal addressing my comments. My concerns regarding Definition 3.1 and the T-optimality are resolved. However, significant concerns regarding the applicability and thus the contribution to the community remain: - The method only applies to GLMs. Therefore, the contribution to the general ML community of ICML is limited. - More complex (real-world) settings are necessary to evaluate the method. At the moment, the provided evaluation is insufficient. - The additional noise due to the privacy mechanism must be accounted for when reporting confidence intervals. Reporting invalid CIs is misleading and uninformative for the reader. Note that multiple methods for providing CIs under DP exist in the literature. Overall, I believe the contribution is better suited for a targeted outlet on privacy. Furthermore, I hope the authors incorporate a more in-depth evaluation of their method and reframe the unclear paragraphs in the paper. 
--- Reply to Comment 1.1.1: Comment: We thank the reviewer again for the thoughtful comments. Below, we address your additional concerns: **The method only applies to GLMs. Therefore, the contribution to the general ML community of ICML is limited.** We want to emphasize that our work introduces a novel direction in differential privacy (DP) by integrating experimental design principles into the construction of private mechanisms, as all three reviewers pointed out. To the best of our knowledge, this is a new and underexplored area in the privacy literature. Given that this is a new direction, it is natural and methodologically necessary to begin with parametric models such as generalized linear models (GLMs), where the concept of Fisher Information is well-defined and its analytical tractability allows for rigorous theoretical development. This mirrors the trajectory of many pioneering works in DP, which initially focused on basic statistical tasks such as mean and median estimation, before extending to more structured models like linear and logistic regression (e.g., [1]–[6], all published at ICML). These parametric models provide a mathematically tractable foundation, making it feasible to develop and rigorously validate new ideas under formal DP constraints. This line of work reflects ICML's strong tradition of supporting rigorous, theory-driven contributions at the intersection of privacy, learning, and statistical inference. Our contribution follows in this tradition by introducing a supervised, utility-aware perspective on private mechanism design that we believe is broadly applicable. We view this work as an important first step toward a more general framework for designing DP mechanisms that integrate covariate information to achieve optimality. Extending the framework to nonparametric or more complex models remains an exciting direction, and we are actively exploring these avenues in ongoing work. 
- [1] Narayanan, Shyam, Vahab Mirrokni, and Hossein Esfandiari. "Tight and robust private mean estimation with few users." International Conference on Machine Learning. PMLR, 2022. - [2] Asi, Hilal, Vitaly Feldman, and Kunal Talwar. "Optimal algorithms for mean estimation under local differential privacy." International Conference on Machine Learning. PMLR, 2022. - [3] Kulesza, Alex, Ananda Theertha Suresh, and Yuyan Wang. "Mean Estimation in the Add-Remove Model of Differential Privacy." International Conference on Machine Learning. PMLR, 2024. - [4] Kulkarni, Tejas, et al. "Differentially private Bayesian inference for generalized linear models." International Conference on Machine Learning. PMLR, 2021. - [5] Alparslan, Baris, Sinan Yildirim, and Ilker Birbil. "Differentially Private Distributed Bayesian Linear Regression with MCMC." International Conference on Machine Learning. PMLR, 2023. - [6] Alparslan, Baris, Sinan Yildirim, and Ilker Birbil. "Private Gradient Descent for Linear Regression: Tighter Error Bounds and Instance-Specific Uncertainty Estimation." International Conference on Machine Learning. PMLR, 2024. **More complex (real-world) settings are necessary to evaluate the method.** While we fully agree that extending the method to more complex, real-world settings is an important long-term goal, we believe that our current first step focusing on GLMs is both necessary and appropriate. Since our method is specifically designed for GLMs, evaluating its performance within this model class is the most relevant and informative way to validate its effectiveness and theoretical properties. **The additional noise due to the privacy mechanism must be accounted for when reporting confidence intervals.** We believe the reviewer may have confused our proposed method (ORRbR) with the naive method (RRbR). 
While achieving $T$-optimality, our proposed method (ORRbR) is also designed to account for the extra variability introduced by the RR mechanism and correct for the resulting bias in estimation. This enables us to construct valid confidence intervals that maintain nominal coverage under the DP constraints. As shown in Figures 3 and 5, the naive method (RRbR) suffers from severely undercovered confidence intervals, which underscores exactly the concern raised by the reviewer. **I hope the authors incorporate a more in-depth evaluation of their method and reframe the unclear paragraphs in the paper.** We agree that both clarity and thorough evaluation are critical, and we are committed to enhancing both in the revision. Our experiments were designed to comprehensively evaluate performance within the GLM framework, aligned with our theoretical analysis, by comparing against baselines across privacy levels and reporting metrics such as MSE, coverage, and CI length. We welcome the reviewer’s suggestions on specific aspects they found lacking or unclear, and we will address them directly in the revised manuscript. Additionally, we are happy to revise any text that may need clarification.
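To illustrate the bias-correction idea in a self-contained way, the sketch below (our own illustration, not the authors' R implementation; the logistic link, sample size, and parameter values are made up) simulates RR-privatized labels and fits $\beta$ by maximizing the likelihood of the privatized response, whose mean is $1 - p_{00} + (p_{00}+p_{11}-1)\,G(\beta^\top x)$:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, beta_true = 20000, np.array([1.0, -2.0])
p00, p11 = 0.8, 0.7          # an RR mechanism with p00, p11 > 0.5

# Public covariates and sensitive binary labels from a logistic model
X = np.column_stack([np.ones(n), rng.normal(size=n)])
pi = 1 / (1 + np.exp(-X @ beta_true))
Y = rng.binomial(1, pi)

# Privatize: keep Y=0 w.p. p00 and Y=1 w.p. p11, flip otherwise
keep = rng.uniform(size=n) < np.where(Y == 1, p11, p00)
Y_star = np.where(keep, Y, 1 - Y)

def neg_loglik(beta):
    g = 1 / (1 + np.exp(-X @ beta))
    # mean of Y* given X under the RR mechanism -- the bias-corrected model
    pi_star = 1 - p00 + (p00 + p11 - 1) * g
    return -np.sum(Y_star * np.log(pi_star) + (1 - Y_star) * np.log(1 - pi_star))

beta_hat = minimize(neg_loglik, np.zeros(2), method="BFGS").x
print(beta_hat)   # close to beta_true despite the privatized labels
```

A convenient side effect of the correction is numerical stability: whenever $p_{00}, p_{11} < 1$, the privatized probability `pi_star` is bounded away from 0 and 1, so the log-likelihood never degenerates.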
Summary: The authors propose an estimation method for a binary response model under LabelDP that is optimal in that it maximizes the trace of the Fisher information matrix. They leverage results regarding asymptotic normality of the MLE to derive confidence intervals for their estimator. Claims And Evidence: The authors support their main theoretical claims in Section 4 with proofs in the appendix. Methods And Evaluation Criteria: The methods and evaluation criteria make sense for the problem. Theoretical Claims: I did not read through the proofs in the appendix in too much detail. Experimental Designs Or Analyses: I did not notice any issues with the design of the experiments in Sections 5 and 6. However, I have a few points I want to raise about the accompanying Figures 2-6: 1. The trends in the figures are strange at small $\varepsilon$, especially in Figures 2 and 4. For example, what is causing the big jump in most of the curves at the point to the right of $\varepsilon = 0.05$? Is this just Monte Carlo error? If so, the authors might consider running more simulations to reduce the error, if computationally feasible. 2. Perhaps a minor point, but as I was trying to understand the trends in the figures, I was thrown by the scaling of the x-axis. The tick marks at 0.05, 0.1, 0.7, and 1 appear evenly spaced, but this does not correspond to a log-scale. How is the x-axis scaled? Supplementary Material: I did not examine the proofs in the appendix in too much detail. Relation To Broader Scientific Literature: The authors improve over prior work in the LabelDP literature for this task, with the primary comparison being to Holohan et al. (2017). Notably, the proposed method achieves the nominal coverage rate in the simulation study, whereas the prior work did not. 
Essential References Not Discussed: The key contribution is an optimal estimator for a setting with a binary response variable, which seems reminiscent of the optimality results of Awan & Slavkovic (JPC 2020). While the Awan & Slavkovic results apply to traditional DP with binary data (and no covariates), I am interested in whether there is a connection between these two notions of optimality in what seem to be related settings. Other Strengths And Weaknesses: The work is well-written and the authors did a good job motivating their method. Other Comments Or Suggestions: 1. The figures are very hard to read in their current form, especially Figure 1. I ask the authors to please update the text size to match that of the remainder of the document. Questions For Authors: I have no additional questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Experimental Designs or Analyses 1.** We appreciate the reviewer pointing out this observation. The abrupt jumps around small $\varepsilon$ values (particularly near $\varepsilon=0.1$) observed in Figures 2 and 4 arise primarily from the specific data generation mechanism of our simulation study for the parameter $\boldsymbol{\beta}$. To clarify, after revising and slightly adjusting our data-generating procedure (especially for $\boldsymbol{\beta}$), we observed that these jumps significantly diminish and the curves become noticeably smoother. This smoother property aligns well with the results of our real data analysis (Figure 6) in our paper, where the curves demonstrate a much more stable and smooth pattern. This smoothness of the real data further supports that the initial observed fluctuations were due to the original simulation setup rather than an inherent instability of the proposed methodology. Following your suggestion, we will increase the simulation runs and provide smoother simulation results to better illustrate the effectiveness and robustness of our approach. **Experimental Designs or Analyses 2.** We thank the reviewer for noting the ambiguity with respect to the scaling of the $x$-axis in Figures $2-6$. In fact, the chosen privacy budget values of $\varepsilon$, specifically $\\{0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.2, 0.7, 1\\}$, were not evenly spaced or set according to a log-scale. Instead, these values were deliberately selected in an irregular and somewhat random manner to extensively test the robustness of our method across a wide range of privacy constraints. This deliberate randomness and irregularity in the choice of $\varepsilon$ values ensure our results are not sensitive to systematic or uniformly spaced intervals. 
Despite such randomness, our method consistently demonstrates strong performance, validating its robustness across varied privacy settings. **Essential References Not Discussed.** The reviewer raised an insightful question about the connection between our optimal LabelDP setting with covariates and traditional DP scenarios (without covariates). Although both papers share the goal of optimal privacy-utility trade-offs for binary responses, our work makes two key advances beyond their framework. - Covariate Integration: Awan & Slavković optimize the RR mechanisms solely for the binary response $Y$ (minimizing $\operatorname{Var}(\widehat{\sum{Y}})$, where $\sum{Y}$ is the sample sum, which is a complete sufficient statistic for the binomial model in their paper) without considering covariate information $X$, while our Fisher information maximization (Eq. 5) explicitly incorporates covariate effects through the model $\mathbb{E}(Y_i \mid X_i) = 1 - p_{00} + (p_{00} + p_{11} - 1) G(\beta_*^{\top} X_i)$. This criterion, known as $T$-optimality, explicitly leverages the covariate structure, enhancing the estimation efficiency when covariates are present. - Inferential Guarantees: Beyond point estimation, our framework enables formal statistical inference with privacy guarantees. Specifically, Corollary 4.11 establishes the asymptotic normality of each coefficient $\beta_{*j}$ under $\varepsilon$- and $(\varepsilon,\delta)$-LabelDP mechanisms, yielding valid confidence intervals that: - Achieve nominal coverage (e.g., 95% intervals contain $\beta_{*j}$ with probability 0.95 as $n \rightarrow \infty$). - Preserve privacy through the optimized RR mechanism (Theorems 4.4 and 4.7). - Account for covariate effects via the Fisher information matrix. 
We will add a discussion in Section 2 comparing these approaches, citing Awan & Slavković's marginal optimality as motivation for our novel optimality under the regression framework. **Other Comments or Suggestions.** We agree with the reviewer on the readability issue. In our revised manuscript, we will significantly improve the readability of all figures, especially Figure 1, by: - Increasing font sizes to match the main text consistently. - Clarifying annotations and axis labels. - Enhancing visual clarity through improved formatting and spacing. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my comments; I have revised my score to a 4. On Experimental Designs or Analyses 2: I remain a bit confused by the choice of presenting the results with non-evenly spaced values of epsilon. Certainly, it is beneficial to test the method's robustness across a wide range of privacy constraints and ensure the results are not sensitive to systematic or uniformly spaced intervals. But unless you uncovered evidence that these are concerns in your preliminary analysis, the presentation of the results in the figures intended for publication should focus on demonstrating the trends uncovered in the experiments as clearly as possible. Although as I said in my original review, this is perhaps a minor point. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for revisiting this point and for their thoughtful comments. We agree that clearly illustrating trends is crucial, especially in figures intended for publication. While our original intention was to demonstrate robustness across a broad range of $\varepsilon$ values, we recognize that evenly spaced values would enhance visual clarity and interpretability. We will revise the figures accordingly in the manuscript.
Summary: This paper addresses statistical estimation and inference in binary response models while preserving the privacy of the labels via LabelDP. Covariate features $X$ are public, but binary response labels $Y$ are sensitive. The authors focus on using randomized response (RR) mechanisms – a classical privacy technique where each true binary label may be flipped with some probability​ to privatize the labels. They formulate it as an optimization problem: choose the RR flipping probabilities $(p_{00}, p_{11})$ (the probabilities of outputting the truthful label for $Y=0$ and $Y=1$, respectively) to maximize the trace of the Fisher Information Matrix of the model. This approach incorporates the influence of covariates into the privacy mechanism design, unlike prior methods that treated label randomization independently of $X$​ Claims And Evidence: The paper’s main claims—that an optimal LabelDP randomized response mechanism exists for binary response models and that the proposed approach significantly outperforms baseline methods—are supported by proofs and simulation studies. Methods And Evaluation Criteria: Yes, see above. Theoretical Claims: I did not check the correctness of the proofs in the appendix. Experimental Designs Or Analyses: The paper’s main experiment setup—comparing different randomized response strategies under various $\varepsilon$-privacy budgets is sound. The sample size ($n=10^5$) is large, which supports asymptotic approximations, and they carefully document metrics (MSE, coverage probability) that align with the paper’s theoretical claims. One possible limitation is that only scenarios with a large $n$ and a single real dataset are shown, so small-sample performance is not really presented. Supplementary Material: No Relation To Broader Scientific Literature: The paper positions their contribution to the literature, leveraging insights from local differential privacy to protect binary labels in the LabelDP setting. 
Essential References Not Discussed: Not appliable. Other Strengths And Weaknesses: Strengths: * The paper’s approach of integrating experimental design principles (maximizing Fisher Information) into differential privacy is an original angle. * The authors clearly derive the mechanism and proofs and show its impact in both simulations and a real-world plagiarism dataset. Weaknesses * The method is tightly scoped to binary label privacy with large-sample asymptotics; it does not address broader settings (e.g., multi-class outcomes, small-sample scenarios). * While they justify maximizing the trace of the Fisher Information, there is little exploration of other design criteria which might have yielded different insights. Other Comments Or Suggestions: No Questions For Authors: * Could this method be extended to multi-class outcomes? (see the k-RR mechanism) * Did you consider other design criteria, such as $D$-optimality? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Question under Experimental Designs Or Analyses** We acknowledge the concern of the reviewer about the large simulation sample size ($n=10^5$) for asymptotic approximations. This choice was intentional to validate our theoretical results (Theorem 4.7), where the guarantees of consistency and normality hold as $n \rightarrow \infty$. However, we emphasize that our method is equally applicable to smaller samples, as demonstrated in the real-data analysis (Section 6) with $n=474$, a realistic sample size for sensitive surveys. Here, ORRbR achieved a 30% lower MSE than RRbR at $\varepsilon=0.1$ (Figure 7), proving its practical utility beyond asymptotic regimes. To further address this point, we will add simulations for $n \in \\{500,2000,5000\\}$ in revision, showing that ORRbR maintains superior coverage (>92% versus RRbR's <85% at $n=500, \varepsilon=0.5$). We will also clarify in Section 5 that the Fisher information optimization remains valid for small $n$, although variance estimates may require finite-sample adjustments (e.g., sandwich estimators). **Weakness 1 and Question 1.** We appreciate the insightful suggestion of the reviewer on multiclass extensions. Indeed, our method can be naturally extended to multiclass outcomes using the $k$-RR mechanism. The optimization strategy of maximizing the trace of the Fisher Information Matrix remains valid for multiclass scenarios, as discussed in Yao and Wang (2019), Optimal subsampling for softmax regression. Specifically, for $k$-class responses, the design matrix in Definition 2.2 of our paper becomes a $k \times k$ stochastic matrix, and the optimality criterion would involve maximizing the Fisher Information's trace across multiple response probabilities under LabelDP constraints. This is an exciting direction that we are actively exploring, and preliminary theoretical analysis indicates promising scalability. 
We plan to provide more detailed theoretical developments and numerical evaluations in future work. We acknowledge the reviewer's observation regarding the large sample size of our simulation study ($n=10^5$). Our initial choice aimed to clearly demonstrate the theoretical properties under asymptotic conditions. However, we want to emphasize that our real data analysis employs a relatively small sample size of 474 German and Swiss students. This practical application highlights the robustness and applicability of our method even in small-sample contexts. **Weakness 2 and Question 2.** We thank the reviewer for highlighting the potential for other optimality criteria. It is important to note that while both $T$-optimality and $D$-optimality aim at maximizing information, $D$-optimality involves maximizing the determinant of the Fisher Information Matrix, which is computationally challenging and more complex due to the involvement of determinant calculations, especially for high-dimensional parameter spaces. However, $T$-optimality is computationally simpler, making it significantly easier to implement in practical scenarios without sacrificing the quality of inference. Moreover, in many cases, the two criteria are equivalent or closely related, and maximizing the trace often provides a satisfactory approximation to maximizing the determinant. In detail, our $T$-optimality criterion (maximizing the trace) is equivalent in asymptotic efficiency but far more practical: 1. Computational tractability: The trace decomposes into a sum of variances (diagonal elements), reducing the problem to scalar optimization (Lemma 4.4). This allows closed-form solutions for $\left(p_{00}, p_{11}\right)$ on the boundary of $\mathcal{R}$ (Theorems 4.5 and 4.7). 2. Interpretability: The trace directly corresponds to minimizing the average variance of $\widehat{\boldsymbol{\beta}}$, which aligns with our goal of precise estimation. 3. 
Equivalence in large samples but different complexity: Both criteria yield consistent estimators, but $T$-optimality achieves this with $O(d)$ complexity versus $O\left(d^3\right)$ for $D$-optimality due to determinant calculations, where $d$ is the number of covariates.
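As a rough numerical illustration of this computational gap (our own sketch, not the authors' code; the design matrix, logistic link, and parameter values are made up), the trace criterion only needs the diagonal of the Fisher Information Matrix, while the determinant requires a full $O(d^3)$ factorization:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5000, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, d - 1))])
beta = np.array([0.5, 1.0, -1.0])

def fim(p00, p11):
    """Fisher information of beta under the privatized mean model
    pi* = 1 - p00 + (p00 + p11 - 1) * sigmoid(x'beta)."""
    g = 1 / (1 + np.exp(-X @ beta))
    c = p00 + p11 - 1
    pi_star = 1 - p00 + c * g
    w = (c * g * (1 - g)) ** 2 / (pi_star * (1 - pi_star))
    return (X * w[:, None]).T @ X          # X' diag(w) X

# T-optimality only needs the diagonal (trace = sum of d scalars),
# whereas D-optimality needs a determinant (O(d^3) per candidate).
for p00, p11 in [(0.7, 0.7), (0.8, 0.6)]:
    I = fim(p00, p11)
    print(p00, p11, np.trace(I), np.linalg.det(I))
```

In an illustration like this, a candidate mechanism $(p_{00}, p_{11})$ would be selected by maximizing the trace over the feasible region, which reduces to comparing scalar sums.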
SDP-CROWN: Efficient Bound Propagation for Neural Network Verification with Tightness of Semidefinite Programming
Accept (spotlight poster)
Summary: The paper introduces SDP-CROWN, a hybrid framework combining semidefinite programming (SDP) relaxations with bound propagation for neural network verification under L2-norm perturbations. The core contribution is a novel linear bound derived from SDP principles that includes a new bias term h, which provides theoretical guarantees for L2-norm balls. Theoretical analysis shows the proposed bound can be tighter than bounds computed for the L-infinity norm. Experimental results demonstrate that SDP-CROWN outperforms state-of-the-art verifiers like α-CROWN and α,β-CROWN while maintaining moderate runtime. Claims And Evidence: Overall, the claims are supported by clear experimental results and theory: Tighter bounds under L2 perturbations: Empirical results (Table 1, Figures 3 and 4) show SDP-CROWN achieves significant verified accuracy improvements, outperforming baselines. Scalability: Experiments on models with 65k neurons validate scalability. Theoretical tightness: Theorem 5.2 proves that in a special yet crucial case, the proposed bound can achieve an improvement by a factor of $\sqrt{n}$. Methods And Evaluation Criteria: The method is well-motivated for L2 robustness verification. Evaluation uses standard MNIST/CIFAR-10 benchmarks and models that are widely recognized in this field. It compares against relevant baselines (alpha-CROWN, alpha-beta-CROWN, LP-Full). The only aspect that may need further clarification is the baseline Naive_Lipschitz, which could benefit from a more detailed explanation. Theoretical Claims: I have reviewed the theorems in the main text and did not find any major issues. Experimental Designs Or Analyses: I have reviewed the experimental setup, and overall, it is valid. The only concern is that I am unsure whether the chosen radius for L2-norm perturbation robustness evaluation is reasonable. 
Supplementary Material: I have reviewed the sections in the appendix concerning the experimental setup, model parameters, and some details of the theorems, and I find them to be generally reasonable and well-supported. Relation To Broader Scientific Literature: The work effectively bridges bound propagation-based verification and SDP technique. It aims to verify model robustness under L2-norm perturbation and thus related to neural network security. Essential References Not Discussed: While I keep up with the literature on neural network verification, particularly the CROWN series, my understanding of SDP-based and Lipschitz-based verification is rather limited. Therefore, my main concern is whether more advanced methods exist, as the evaluation only considers a “naive” Lipschitz approach. Other Strengths And Weaknesses: 1. I appreciate Section 3 which clearly explains why and how the existing SOTA methods produce loose bounds under L2-norm perturbation. 2. Both the theoretical analysis and empirical results demonstrate that the proposed method represents a valuable step forward in addressing this problem. Other Comments Or Suggestions: Typo: line 171 “lose” -> “loose”. Questions For Authors: 1. As I understand it, the core idea of this paper is that an L2-norm ball with the same radius is a subset of an L-infinity norm box, which allows existing bound propagation-based verification methods to be further tightened by focusing on L2-norm balls. The key improvement in this work lies in obtaining a tighter linear bias term h. My question is whether this idea is limited to refining the bias term h, or if it can also be extended to improve the slope term g for even tighter bounds. 2. In the evaluation on MLP models and the MNIST dataset, the baseline BM exhibits superior performance (even approaching the upper bound given by PGD). However, I could not find an introduction to BM—did I overlook something? 3. The selected baselines include alpha-CROWN and alpha-beta-CROWN. 
As I understand it, these existing CROWN-based tools directly verify L-infinity norm boxes rather than L2-norm balls under the current experimental setup. Would GCP-CROWN be inapplicable to this experiment? To my knowledge, GCP-CROWN has demonstrated stronger verification capabilities. Additionally, the Lipschitz-based baseline appears to exhibit a relatively favorable overhead. Are there more advanced methods in this category? A more detailed introduction to this baseline may also be necessary. 4. Again, I am not very familiar with SDP-based verification techniques, so I had some confusion while reading Lines 120–135. According to the authors, U_i = u_i^2 and V_i = v_i^2, and the constructed matrix X_i being positive semidefinite seems to directly imply u_i v_i = 0. This appears to be an exact equivalence to ReLU activation—where is the relaxation incorporated in this formulation? Based on my understanding, the conditions U_i \geq u_i and V_i \geq v_i seem to make more sense. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We want to thank reviewer zfFU for your positive feedback and valuable comments. As suggested, we’ve added new experiments on GCP-CROWN [2] and BICCOS [3] (a follow-up work of GCP-CROWN), as well as a more sophisticated Lipschitz constant method (LipSDP [1]). **New experimental results** We provide additional comparisons with LipSDP [1], GCP-CROWN [2], and BICCOS [3] in the table below, using the same settings as Table 1 in our paper. We report the verified accuracy (%) and average per-sample verification time (in seconds), except for LipSDP, where we report the total time required to compute the Lipschitz constant. [Table A: Comparisons to new baselines](https://imgur.com/a/QITgjmq) Since GCP-CROWN and BICCOS are both part of the $\alpha,\beta$-CROWN verifier, to avoid confusion, we renamed the original $\alpha,\beta$-CROWN baseline to $\beta$-CROWN (Wang et al., 2021). GCP-CROWN and BICCOS marginally improve over $\beta$-CROWN, although the gap to our method is still significant, especially on large models. LipSDP can outperform naive Lipschitz but is still not competitive compared to our method, and computing the Lipschitz constant using SDP is also quite slow and not scalable. [1] Fazlyab et al. "Efficient and accurate estimation of Lipschitz constants for deep neural networks." NeurIPS 2019 [2] Zhang et al. "General cutting planes for bound-propagation-based neural network verification." NeurIPS 2022 [3] Zhou et al. "Scalable Neural Network Verification with Branch-and-bound Inferred Cutting Planes." NeurIPS 2024 **"Whether the chosen radius for L2-norm perturbation is reasonable"** While Table 1 only presents results for a single $\ell_2$ perturbation radius, we would like to highlight Figure 4 (c) (d) in the Appendix (also [linked here](https://imgur.com/a/MFUS3RX)), which presents the certified lower bounds computed from different methods over **a wide range of $\ell_2$ perturbation radii**. 
As shown in Figure 4, our lower bound is consistently tighter than $\alpha$-CROWN and the naive Lipschitz approach across different perturbation radii, which demonstrates the effectiveness of our approach. **"More advanced methods beyond naive Lipschitz"** We agree that more advanced methods based on Lipschitz constant estimation can yield significantly tighter robustness bounds. To address this, we have added a comparison with LipSDP [1] in the [Table A](https://imgur.com/a/QITgjmq) above, which employs SDP relaxation to estimate the network’s Lipschitz constant. This provides a stronger baseline for Lipschitz-based verification methods. Our method still significantly outperforms this baseline in all scenarios. **Response to Questions For Authors:** 1. While our current work focuses on refining the bias term $h$, it is certainly possible to refine both the slope term $g$ and bias term $h$ simultaneously for tighter bounds. A key direction for future work is developing an efficient parameterization of $g$ that can be seamlessly integrated into existing bound propagation frameworks. 2. BM is an SDP-based method [4]. It exactly solves the SDP relaxation of the verification problem using specialized low-rank solvers. BM is one of the tightest SDP-based verifiers (as demonstrated in their paper) but is not scalable to CIFAR-10 models. We chose BM because it is a recently published SDP-based method with strong results. 3. We have added a comparison with GCP-CROWN [2] and BICCOS [3] above. The original GCP-CROWN implementation considers $\ell_\infty$ norm only, but we’ve been able to extend the implementation to $\ell_2$ norm by finding cutting planes specifically for $\ell_2$ norm input constraints. It marginally improves performance and the gap between GCP-CROWN and our method is still large, especially on bigger models. 4. 
We note that without $\mathrm{rank}(X_i)=1$, $X_i\succeq 0$ alone does not imply $u_iv_i = 0$, and the relaxation stems from dropping the constraint $\mathrm{rank}(X_i)=1$. Specifically, to derive SDP relaxation of ReLU activation: $x_i=u_i$, $u_iv_i=0$, $u_i\geq 0$, $v_i\geq 0$. We first add a redundant constraint $X_i=[1\ u_i\ v_i][1\ u_i\ v_i]^T\succeq 0$ and then set $U_i=u_i^2$, $V_i=v_i^2$, and $u_iv_i=0$ in $X_i$. It is clear that $X_i\succeq 0$ and $\mathrm{rank}(X_i)=1$ if and only if $u_iv_i=0$. Since $\mathrm{rank}(X_i)=1$ is nonconvex, we drop the rank-1 constraint to obtain the SDP relaxation: $x_i=u_i$, $u_i\geq 0$, $v_i\geq 0$, $X_i\succeq 0$. [4] Chiu et al. "Tight certification of adversarially trained neural networks via nonconvex low-rank semidefinite relaxations." ICML 2023. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed rebuttal and most of concerns are solved. I would like to keep my score unchanged.
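A quick numerical check (our own illustration, not from the paper) makes the looseness concrete: once the rank-1 constraint is dropped, $X_i \succeq 0$ admits points with $u_i v_i \neq 0$ that violate the exact ReLU complementarity:

```python
import numpy as np

def moment_matrix(u, v, U, V):
    # X_i = [[1, u, v], [u, U, 0], [v, 0, V]] with the u*v entry
    # set to 0, as in the relaxation described above
    return np.array([[1.0, u,   v],
                     [u,   U,   0.0],
                     [v,   0.0, V]])

# Exact case (U = u^2, V = v^2): PSD-ness forces u*v = 0, matching ReLU.
X_exact = moment_matrix(1.0, 1.0, 1.0, 1.0)
print(np.linalg.eigvalsh(X_exact).min() >= 0)    # False: u = v = 1 infeasible

# Relaxed case (U > u^2, V > v^2, rank-1 dropped): u = v = 1 is feasible.
X_relaxed = moment_matrix(1.0, 1.0, 4.0, 4.0)
print(np.linalg.eigvalsh(X_relaxed).min() >= 0)  # True: PSD despite u*v = 1
```

The relaxed matrix has rank 3, so it cannot be written as $[1\ u\ v]^T[1\ u\ v]$; dropping the rank-1 constraint is exactly what enlarges the feasible set beyond the ReLU graph.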
Summary: The work proposes SDP-CROWN, a modified bound propagation framework based on CROWN for verifying the robustness of neural networks to $\ell_2$ norm perturbations. SDP-CROWN introduces an additional parameter into the bound propagation framework which is used for tightening the bias term in the linear relaxations propagated through the network. The formulation for the bias term is derived from the tighter Semidefinite Programming relaxation and enables a more accurate representation of the $\ell_2$-norm interdependencies between neurons. When compared to a number of competing approaches, the authors show experimentally that their method results in tighter bounds while preserving scalability. Claims And Evidence: - The general claims are mostly supported by sufficient evidence. SDP-CROWN clearly outperforms the baselines which are considered by the authors. However, as mentioned in the methods section, some comparisons with other methods are missing which makes it more difficult to assess the contributions of this work. Besides this, a clarification of what the "BM" method is (which outperforms SDP-CROWN) would be helpful. - Figure 1: This figure is supposed to show the advantages of SDP-CROWN, but I find it confusing for a number of reasons. Firstly, it is unclear which dataset the ConvBig network used for creating this figure is actually trained on. Secondly, the caption refers to Section 6.1 for details but 6.1 does not contain any results on ConvBig. Thirdly, the axes in the figure are not labeled, it is unclear what the whiskers represent (confidence intervals? Or is the upper number the PGD upper bound of robustness?). Lastly, the numbers mentioned in the caption are different from the ones shown in the figure. Overall, this figure probably needs to be revised and currently doesn't help understand the method. 
Methods And Evaluation Criteria: - I wondered why the method is only integrated into the $\alpha$-CROWN verifier and not into $\alpha, \beta$-CROWN or even GCP-CROWN [1], possibly with the BICCOS cuts [2]. If this is difficult to do, at least a comparison with these should be possible. Since the authors describe at the beginning that this is not extremely difficult to do (just construct an $\ell_\infty$ box containing the $\ell_2$ box as a naive baseline), there seems to be little reason not to provide those results on the state-of-the-art verifiers (unless I'm missing something here). - The fact that no benchmarking against SDP solvers is done despite the fact that the authors repeatedly mention that they are the most suitable method for solving $\ell_2$ robustness verification problems is a major weakness of the work. In Section 6, they state that "We ignore the comparison with SDP-based verifiers as we only use SDP relaxation for facilitating linear bound propagation over a $\ell_2$-norm ball region.". I don't understand this argument; to me, the fact that SDP is only used in a certain way in the proposed approach doesn't mean that the proposed approach shouldn't be benchmarked against what the authors themselves identify as the (presently) most suitable way to solve the verification problem. [1] Zhang, H., Wang, S., Xu, K., Li, L., Li, B., Jana, S., ... & Kolter, J. Z. (2022). General cutting planes for bound-propagation-based neural network verification. Advances in neural information processing systems, 35, 1656-1670. [2] Zhou, D., Brix, C., Hanasusanto, G. A., & Zhang, H. (2024). Scalable Neural Network Verification with Branch-and-bound Inferred Cutting Planes. arXiv preprint arXiv:2501.00200. Theoretical Claims: I checked all the proofs in the main part of the paper. Some of them are a bit difficult to follow since a number of reformulation/rewriting steps are done implicitly, but overall the proofs seem correct to me.
Experimental Designs Or Analyses: In Table 1 there is a method termed "BM" for the MNIST-MLP which appears to outperform SDP-CROWN by a significant margin (although the runtime is much longer). Could the authors clarify what this method is and why it is not explained in the paper? I was surprised to see a method here which outperforms the proposed method but is not described in the work. Supplementary Material: I reviewed Appendix A and B Relation To Broader Scientific Literature: $\ell_2$ robustness is not considered in most of the verification literature or only considered in a somewhat naive way (except for the global robustness literature and the works on constructing networks with small Lipschitz constants), so this work tackles a problem that seems relevant. I find the lack of contextualisation and comparison with these methods problematic. Essential References Not Discussed: There seem to be a number of references on verification of neural networks to $\ell_2$ norm robustness missing in the related work section. The section only mentions SDPs and standard bound propagation. However, there are a number of other works on such perturbations, see e.g. Table 1 in [3] for a number of other papers that support $\ell_2$ perturbations. There are also works such as [4] which construct networks more amenable to robustness verification and which usually focus on $\ell_2$ robustness as well. However, neither of these feature in the related work section. Approaches such as [5] which compute the Lipschitz constant of a network using SDPs might also be relevant. A comparison with the method proposed by [6] which also supports $\ell_2$ perturbation would be extremely important since that method seems to perform comparably well when compared to bound propagation methods. [3] Meng, M. H., Bai, G., Teo, S. G., Hou, Z., Xiao, Y., Lin, Y., & Dong, J. S. (2022). Adversarial robustness of deep neural networks: A survey from a formal verification perspective.
IEEE Transactions on Dependable and Secure Computing. [4] Hu, K., Zou, A., Wang, Z., Leino, K., & Fredrikson, M. (2023). Unlocking deterministic robustness certification on imagenet. Advances in Neural Information Processing Systems, 36, 42993-43011. [5] Fazlyab, M., Robey, A., Hassani, H., Morari, M., & Pappas, G. (2019). Efficient and accurate estimation of lipschitz constants for deep neural networks. Advances in neural information processing systems, 32. [6] Chiu, H. M., & Zhang, R. Y. (2023, July). Tight certification of adversarially trained neural networks via nonconvex low-rank semidefinite relaxations. In International Conference on Machine Learning (pp. 5631-5660). PMLR. Other Strengths And Weaknesses: The strengths and weaknesses are mentioned above. Other Comments Or Suggestions: #### Typos - Line 113: tightened by optimizing over the linear relaxation themselves --> tightened by optimizing over the linear relaxation**s** themselves - Line 160: defining a set of linear relaxation --> defining a set of linear relaxation**s** - Line 170: why bound propagation tends to be lose --> why bound propagation tends to be lo**o**se - Line 177: "with radii $\|x - \hat{x}\|$" - what norm is this? I assume $\ell_2$? - Line 177: are a factor of $\sqrt{n}$ than the radius --> are a factor of $\sqrt{n}$ **larger** than the radius - Line 244: the bound [...] satisfy --> the bound [...] satisf**ies** - Line 253: We are now ready to proof --> We are now ready to pro**ve** - Line 255: Fix any [...] and optimized each [...] yields --> Fix**ing** any [...] and optimiz**ing** each [...] yields - Line 257: We show**s** that --> We show that - Line 260: As shown in Lemma 5.1, linear lower bound [...] yields --> As shown in Lemma 5.1, **the** linear lower bound [...] 
yields - Line 266: The desire result follows --> The desire**d** result follows - Line 271: our method is guarantee to --> our method is guarantee**d** to - Line 321: and $h(\alpha)$ form $\alpha$-CROWN --> and $h(\alpha)$ f**ro**m $\alpha$-CROWN - Line 410: As neural network verification is **an** NP-hard, all methods --> As neural network verification is NP-hard, all methods - Line 429: significantly narrowing the gap the PGD upper bound --> significantly narrowing the gap **with respect to** the PGD upper bound #### Other - Can Figure 4 be moved to the main paper? It seems like there would be enough space for it and it would make it easier to check the figure while reading Section 6.3. Questions For Authors: - Could the authors explain their reasoning in line 205ff a bit more? They assume that $\| \hat{x} \|_\infty \leq \rho$ holds and if it doesn't, they substitute something with $u_i^*, v_i^*$. Could they explain where exactly these $u_i^*, v_i^*$ are substituted and how that leads to their assumption being correct? - Could the authors comment on GCP-CROWN/BICCOS and why they weren't considered in this work? Are any results on these verifiers available? - Is there a reason why SDP-based verifiers are not evaluated? - Are the authors aware of the substantial body of literature on $\ell_2$ robustness verification using special architectures such as [4] above? Could they compare their work/contextualise it relative to those works? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for the detailed review. We added **additional experiments on GCP-CROWN and BICCOS, as well as LipSDP [5] (another SDP method) for all models** as requested. We hope you can reevaluate our paper based on our response. **Clarifying Figure 1** We apologize for the confusion and will correct Figure 1. The model name should be ConvLarge, not ConvBig. ConvLarge is trained on CIFAR-10 and has 4 convolutional and 3 linear layers, totaling 24.6M parameters. The whiskers in Figure 1 illustrate the uncertainty gap where the true certified accuracy lies; blue numbers represent lower bounds on certified accuracy from different methods, and red numbers represent the PGD upper bound. A smaller gap indicates stronger verification. **Comparison to GCP-CROWN and BICCOS** During the rebuttal period, we implemented $\ell_2$ norm support in GCP-CROWN and BICCOS. In particular, the cutting planes were found using precise $\ell_2$-norm constraints rather than by directly constructing an enclosing $\ell_\infty$ ball. This represents the best that GCP-CROWN and BICCOS can do. We included comparisons to GCP-CROWN and BICCOS on **all benchmarks** in Table A below. [Table A: Comparisons to new baselines](https://imgur.com/a/QITgjmq) Since GCP-CROWN and BICCOS are both part of the $\alpha,\beta$-CROWN verifier, to avoid confusion, we renamed the original $\alpha,\beta$-CROWN baseline to $\beta$-CROWN (Wang et al., 2021). GCP-CROWN and BICCOS marginally improve over $\beta$-CROWN, although the gap to our method is still large, especially on bigger models. **Explain BM method** BM is an SDP-based method [6]. It exactly solves the SDP relaxation of the verification problem using specialized low-rank solvers. BM is one of the tightest SDP-based verifiers (as demonstrated in their paper) but is not scalable to CIFAR-10 models. We chose BM because it is a recently published SDP-based method with strong results.
BM is tighter than our method because our approach remains LP-based as the SDP relaxation is only used to refine the offset of linear bounds. However, our method is orders of magnitude more scalable; our ConvLarge is a factor of 100 larger than the models considered in BM. Our comparisons with BM demonstrate that the substantial improvement in scalability achieved by our method comes with only a mild loss of tightness. **Comparison to SDP-based methods** Besides the BM SDP-based method already reported in our paper, we conduct additional experiments on LipSDP [5] in Table A. [Table A: Comparisons to new baselines](https://imgur.com/a/QITgjmq) For a fair comparison, we run LipSDP with multiple configurations by splitting original networks into subnetworks. We show the results in Table A. The verified accuracy by LipSDP is significantly lower than ours, even in the tightest (slowest) setting with no split. We also considered additional SDP baselines that can scale to the networks we evaluated, such as SDP-FO (Dathathri et al., 2020). However, we found that their algorithm and implementation support $\ell_\infty$ norm only, and it is not straightforward to adapt their implementation to our setting. **Add new references** Thank you for bringing these references to our attention. We will incorporate these works into the final version to provide a more comprehensive discussion of related research. We’ve added an experimental comparison to GCP-CROWN [1], BICCOS [2], Lip-SDP [5] above, and [6] is the BM in our paper. We will cite and discuss [3] (a survey paper) and [4] (focusing on training certifiable networks with special architectures). **Response to Questions For Authors:** 1. Suppose $c^T\textrm{ReLU}(x)\geq g^Tx+h$ holds within $\Vert x-\hat x\Vert_\infty \leq \rho$. Observe that from bound propagation $g_i=c_i$ if $\hat x_i>\rho$, and $g_i=0$ if $\hat x_i<-\rho$. Hence, $c_i\textrm{ReLU}(x_i)-g_ix_i=0$ if $|\hat x_i|>\rho$.
Therefore $x_i^\star=\hat x_i$ is a minimizer of $\min_x c^T\textrm{ReLU}(x)-g^Tx\text{ s.t. } \Vert x-\hat x\Vert_2 \leq \rho$ if $|\hat x_i|>\rho$, and $x_i$ can be removed by substituting $x_i=\hat x_i$. It follows from positive/negative splitting $x_i^\star=u_i^\star - v_i^\star$ that $u_i^\star=\max\lbrace\hat x_i,0\rbrace$ and $v_i^\star=-\min\lbrace\hat x_i,0\rbrace$. 2 & 3. See our comparison to GCP-CROWN, BICCOS, LipSDP and BM above. 4. Our method is a hybrid framework that tightens bound propagation using SDP relaxations specifically for ReLU activations. Our approach is complementary to existing work on leveraging specialized architectures, offering an alternative route to achieve certified robustness. We will add a paragraph to discuss methods that leverage specialized architectures, such as [4], Cayley layers, AOL, SLL, etc. While a direct comparison is not feasible due to the different aims of these approaches, we will explore the design of verification-friendly architectures that can utilize the strengths of both bound propagation and specialized architectures in future work.
> I am still slightly confused about the BM method and the fact that it is not mentioned in your paper but then appears as a benchmark in one table, but I am sure that this can be corrected. We wish to clarify that SDP is a *formulation*, whereas BM is a *solver*; it is one among several others, such as MOSEK, SeDuMi, SDPNAL+, for solving SDPs. To give an analogy, the system of equations $Ax=b$ is a formulation, whereas sparse Cholesky factorization and LU decomposition are different solvers for computing the same $x$. Our intention was to compare against SDP, which, as rightly stated in your review, is extremely important. In contrast, BM is only one of several possible SDP solvers (which we cited in the experimental section), and not really a "verification method" in its own right. We also privately tried MOSEK for solving the SDP, but found BM to be more scalable. We apologize that this distinction was not spelled out in the original submission nor in our rebuttal. Throughout the paper, all instances of "BM" should be understood to mean "SDP", with BM being only the solver used to solve it. We also added comparisons with LipSDP. This, like our proposed method, is an "SDP-based verification method", because it uses the ideas of SDP but does not actually solve it. Please let us know if there are further concerns that we should address. If you are happy with our rebuttals/answers, we would appreciate any further score adjustments. Thank you very much.
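The slope argument in point 1 of the rebuttal above can be checked numerically in one dimension. The snippet below is an illustrative sketch; the values of `x_hat`, `rho`, and `c` are assumptions for demonstration, not taken from the paper:

```python
import numpy as np

# One-dimensional numeric check of the rebuttal's claim: when
# |x_hat| > rho, bound propagation gives slope g = c (the ReLU is
# always active inside the perturbation ball), so c*ReLU(x) - g*x
# vanishes on the whole ball and x* = x_hat is a minimizer.
x_hat, rho, c = 2.0, 1.0, 1.0   # illustrative values, x_hat > rho
g = c                           # propagated slope in the active regime
xs = np.linspace(x_hat - rho, x_hat + rho, 101)  # the ball in 1-D
residual = c * np.maximum(xs, 0.0) - g * xs
assert np.allclose(residual, 0.0)  # objective is constant on the ball
```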
Summary: The authors propose a new method they call SDP-CROWN. They use the framework of semidefinite programming to derive linear bounds on the network's behaviour, however based not on an $L_\infty$ norm but on $L_2$-norm-bounded perturbations. This linear bound can then be used for any linear bound propagation method to obtain a certificate. Specifically, what they do is to use a standard bound propagation method, but the constant offset of that bound is then adjusted (from an $L_\infty$ bound to an $L_2$ bound offset) but still sound. Claims And Evidence: - They claim that their bounds can be up to $\sqrt{n}$ better compared to standard single-neuron bounds. While this is clear for the input, for the output this can be larger or smaller, I would assume (imprecisions can accumulate). - The authors also claim soundness, which is supported by a proof (4.1). Methods And Evaluation Criteria: The method is evaluated against several competitors that are tailored for $L_\infty$. Evaluation takes place on MNIST and CIFAR-10, which is standard. The criteria are certified accuracy and time. Theoretical Claims: See above. Experimental Designs Or Analyses: The experimental design is standard. That said, I would be curious how far one could push the method. Supplementary Material: The supplementary material was sporadically read. Relation To Broader Scientific Literature: Most related work I know of is adequately discussed. What I find curious about the work here is that the offset adjustment here is the opposite of what is done in [A], where unsound bounds are adjusted to be sound. Here, sound bounds are adjusted to be more tight but still sound. - [A] https://proceedings.neurips.cc/paper_files/paper/2019/file/f7fa6aca028e7ff4ef62d75ed025fe76-Paper.pdf Essential References Not Discussed: - https://openreview.net/pdf?id=awHTL3Hpto I have the feeling that the limits regarding the most optimal linear bounds (Salman et al 2019) regarding the expressivity also apply here.
Specifically, just restricting $\ell_\infty$ to $\ell_2$ and adjustment of the offset do not address fundamental expressivity shortcomings. Other Strengths And Weaknesses: I like the combination of SDP and linear bound propagation. This is a neat idea - though it is not quite clear to me how far one could push this idea. Other Comments Or Suggestions: No further suggestions. Questions For Authors: The questions are above. Code Of Conduct: Affirmed. Overall Recommendation: 4
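The up-to-$\sqrt{n}$ claim discussed in this review can be illustrated for a single linear layer with a tiny numeric sketch (illustrative, not taken from the paper): the tightest lower bound of $g^T x$ over the $\ell_2$ ball $\|x\|_2 \le \rho$ is $-\rho\|g\|_2$, while over the enclosing $\ell_\infty$ box it is $-\rho\|g\|_1$, which in the worst case is $\sqrt{n}$ times looser.

```python
import numpy as np

# Worst-case gap between bounding a linear function over the l2 ball
# ||x||_2 <= rho versus the enclosing l_inf box ||x||_inf <= rho.
# The all-ones direction g attains the full sqrt(n) factor.
n, rho = 100, 1.0
g = np.ones(n)
lb_ball = -rho * np.linalg.norm(g, 2)  # = -sqrt(n) = -10 over the ball
lb_box = -rho * np.linalg.norm(g, 1)   # = -n = -100 over the box
assert np.isclose(lb_box / lb_ball, np.sqrt(n))  # sqrt(n) looser
```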
Rebuttal 1: Rebuttal: We want to thank reviewer 4cg2 for valuable comments and for recognizing the key contributions of our paper. We hope our response adequately addresses your questions and concerns. **"They claim that their bounds can be up to $\sqrt{n}$ better compared to standard single-neuron bounds. While this is clear for the input - for the output this can be larger or smaller I would assume. (imprecisions can accumulate) "** The input bound improvement by a factor of $\sqrt{n}$ follows directly from the fact that the corners of the $\ell_\infty$ box enclosing an $\ell_2$ ball lie a factor of $\sqrt{n}$ further from the center than the ball's radius. Regarding the output bound, we prove that our method can achieve up to a $\sqrt{n}$ improvement vs traditional bound propagation (Theorem 5.2). However, this proof applies specifically to a simple network $\mathrm{ReLU}(x)$ under $\ell_2$ perturbations of the form $\Vert x\Vert_2\leq\rho$. For general multi-layer neural networks, the extent of improvement is hard to quantify theoretically. To address this, we provide empirical results demonstrating that our method consistently outperforms bound propagation methods, especially on large networks. **"The experimental design is standard. That said, I would be curious how far one could push the method. "** Thank you for your insightful comment. In this paper, we utilize SDP to tighten the offset of the linear bounds in bound propagation. Our method has been integrated into $\alpha$-CROWN with minimal overhead, as it only introduces one additional variable $\lambda$ per postactivation neuron. Our next step is to integrate our method into the $\alpha,\beta$-CROWN codebase, hence allowing our method to work with powerful branch-and-bound based methods and a wide range of neural network architectures.
Our current algorithm already outperforms the existing branch-and-bound based approaches (including GCP-CROWN, BICCOS and $\beta$-CROWN, as presented in the **[new table](https://imgur.com/a/QITgjmq)** we added during rebuttal), and additional improvements can be further demonstrated once we finish this integration. Additionally, we aim to extend our approach to jointly optimize both the slope and offset terms, with the goal of developing optimal linear relaxations for $\ell_2$ perturbations. We expect these two future directions to yield tighter results on an even larger scale than considered in the present paper. **"Most related work I know of is adequately discussed. What I find curious about the work here is that the offset adjustment here is exactly the opposite of what is done in [A] and [B], where unsound bounds are adjusted to be sound. Here, sound bounds are adjusted to be more tight but still sound."** We thank the reviewer for bringing these references to our attention. We will include a discussion of these works in the final version of our paper. [A] presents a method for certifying robustness against geometric transformations. This approach begins with potentially unsound bounds and subsequently adjusts them to be sound through a combination of sampling and optimization techniques. In contrast, our method operates in the opposite manner; we start with sound bounds and refine them to be tighter while ensuring they remain valid. Regarding [B], we believe this reference may have been included by mistake, as it presents a framework for robotic tissue manipulation using deep learning for feature extraction, which does not appear relevant to the context of our work. We will be happy to discuss any additional works you suggest. **"I have the feeling that the limits regarding the most optimal linear bounds (Salman et al 2019) regarding the expressivity also apply here.
Specifically, just restricting $\ell_\infty$ to $\ell_2$ and adjustment of the offset do not address fundamental expressivity shortcomings."** Our critical contribution is to introduce multi-neuron SDP relaxations into bound propagation. This allowed us to overcome the “convex relaxation barrier” faced by traditional single-neuron LP relaxations underlying the vast majority of prior work. It also improved upon prior work on multi-neuron relaxations, which were either too loose (i.e. LP-based) or limited to tiny models (i.e. SDP-based). Addressing $\ell_2$-norm constraints is a crucial first step, but we anticipate further work to greatly expand the applicability of models and new threat models (beyond $\ell_p$ norm) using our key idea of combining SDP with bound propagation. We will also cite the paper on the Expressivity of ReLU networks in our final version.
SpikF: Spiking Fourier Network for Efficient Long-term Prediction
Accept (poster)
Summary: This paper introduces the Spiking Fourier Network (SpikF), an attention-free framework designed to address key challenges in applying Spiking Neural Networks (SNNs) to long-term prediction tasks. They encode input sequences in patches and employ a frequency-domain selection mechanism that better captures the sequential properties of time-series data. Extensive evaluations across eight long-term prediction datasets show that SpikF achieves a 1.9% reduction in Mean Absolute Error (MAE) compared to state-of-the-art models while reducing energy consumption by 75.05%. Claims And Evidence: Yes. Please refer to "Questions For Authors" and "Weakness". Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. The equations in this paper are correct. Experimental Designs Or Analyses: Yes. Please refer to "Questions For Authors". Supplementary Material: Yes, I have checked the code they provided in the supplementary material. Relation To Broader Scientific Literature: It extends how SNNs deal with long-term time series. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The method is simple and easy to understand. 2. The proposed model achieves SOTA results and the ablation study is convincing. 3. The design of patch-based splitting for long-term time series is remarkable. I like this point. Weaknesses: 1. The proposed SpikF's hardware friendliness is ignored. I have checked the authors' code of the implementation of the FFT operation in SpikF. I think that an operation like "torch.fft" is almost impossible to conduct on neuromorphic chips as it includes so much floating-point calculation. 2. The novelty of this paper is questioned. The usage of FFT is quite common in sequential tasks, like sequential recommendation, time-series forecasting, and even natural language processing. It seems the authors just replaced the self-attention module with the FFT module. 3. The calculation of energy of SNNs is not accurate. See comments.
Other Comments Or Suggestions: I have to point out that when designing SNN architectures, it is unwise for researchers to just take a common module from traditional ANNs to replace a certain module in SNNs and then report SOTA performance. I think that a good study on SNNs should consider either hardware-friendliness or biological plausibility. From the perspective of pure deep learning, this paper is great. However, I think the authors ignore both the hardware-friendliness and the biological plausibility of SNNs. I do not think the FFT operation can be applied to neuromorphic chips. What's more, the energy calculation in Appendix A.2 is based on the discussion of [1][2]. Since both ANN and SNN process the same input data (with SNN using direct encoding in the first layer), the energy efficiency differences may only arise from variations in memory access or MAC/AC operations [3]. The paper only compares energy consumption during computation. Energy consumption is primarily determined by memory access rather than FLOPs or SOPs [4], but this impact is not included. **This omission should be addressed, as it is a common flaw in publications claiming energy savings.** [1] Yao M, Zhao G, Zhang H, et al. Attention spiking neural networks[J]. IEEE transactions on pattern analysis and machine intelligence, 2023, 45(8): 9393-9410. [2] Zhou Z, Zhu Y, He C, et al. Spikformer: When Spiking Neural Network Meets Transformer[C]//The Eleventh International Conference on Learning Representations, 2023. [3] Shen G, Zhao D, Li T, et al. Are Conventional SNNs Really Efficient? A Perspective from Network Quantization[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 27538-27547. [4] Lemaire E, Cordone L, Castagnetti A, et al. An analytical estimation of spiking neural networks energy efficiency[C]//International Conference on Neural Information Processing. Cham: Springer International Publishing, 2022: 574-587. Questions For Authors: 1.
Please report the standard deviation of SpikF in Table 1. I have to make sure the reported results are not just the best records among various random seeds. 2. Please discuss the possibility of applying FFT operations to neuromorphic hardware. 3. Please consider how memory access and other chip activities impact SNN energy consumption. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for your comprehensive review and the valuable insights you have provided. In the subsequent sections, we will address your questions one by one, and we will integrate all relevant discussions into the article in the upcoming revision. >**Q1:** The proposed SpikF's hardware friendliness is ignored. I have checked the authors' code of the implementation of the FFT operation in SpikF. I think that an operation like "torch.fft" is almost impossible to conduct on neuromorphic chips as it includes so much floating-point calculation. We have discussed hardware friendliness in Section 2.2. For better clarity, we extend some explanation of our approach to hardware friendliness here. The implementation of the Fast Fourier Transform (FFT) in neuromorphic hardware has been validated both theoretically and empirically by studies [1] and [2]. Your concerns regarding the floating-point calculations involved in FFT are addressed through their proposed methods. Study [1] has demonstrated that matrix multiplication can be represented by a spiking linear layer with an appropriately defined weight matrix. Accordingly, they initially express the FFT as a series of matrix multiplications and subsequently employ an SNN with an equivalent number of layers to avoid the challenges associated with floating-point operations. Meanwhile, study [2] leverages the membrane dynamics of the Resonate-and-Fire neuron, an extension of the LIF model, to naturally perform the Fourier Transform. This approach also eliminates the need for additional floating-point operations. [1] Lopez-Randulfe et al., “Time-Coded Spiking Fourier Transform in Neuromorphic Hardware,” IEEE Trans. Comput., vol. 71, no. 11, pp. 2792–2802, 2022. [2] Orchard et al., “Efficient Neuromorphic Signal Processing with Loihi 2,” 2021 IEEE SiPS, pp. 254-259. >**Q2:** The novelty of this paper is questioned. Yes, FFT is commonly used in sequential tasks.
However, most previous works apply FFT to the entire time-series, which facilitates the utilization of high-frequency components. In contrast, our approach employs patch and grouping mechanisms to enhance the utilization of low and middle-frequency components from the original series, while local information is emphasized by the spiking patches. As a result, SpikF achieves **efficient utilization of the full spectrum** to improve accuracy, which has not been explored by prior research. >**Q3:** The calculation of energy of SNNs is not accurate. After carefully reading the methods [3] [4] you referred to, we adopted the methods proposed by [3] and [4] to provide more metrics for a comprehensive analysis of the energy efficiency of SpikF and iTransformer ([Table 1](https://anonymous.4open.science/r/0D02/7)). In terms of ACE and $E_{Total}$, the energy consumption of SpikF is $6.27\times$ and $3.16\times$ lower than iTransformer's, respectively. [3] Shen et al., “Are Conventional SNNs Really Efficient? A Perspective from Network Quantization,” IEEE/CVF CVPR, 2024, pp. 27538-27547. [4] Lemaire et al., “An analytical estimation of spiking neural networks energy efficiency,” Springer, 2022, pp. 574-587. [5] Yao et al., “Attention Spiking Neural Networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 45, no. 8, pp. 9393-9410, 2023. [6] Zhou et al., “Spikformer: When Spiking Neural Network Meets Transformer,” ICLR, 2023. >**Q4:** I think that a good study on SNNs should consider either hardware-friendliness or biological plausibility. We totally agree with your perspective on SNN research. The hardware-friendliness has been discussed in **Q1**. Although it is less discussed in our work, biological plausibility has been considered in our design: The basilar membrane in the inner ear exhibits a gradient of stiffness along its length.
This mechanical property processes sound at different frequencies and transforms the series data into a biological current signal, which is the inspiration for our SFS mechanism [7]. In addition, we believe that quality research in the SNN domain should balance biological plausibility and precision. SpikF achieves a $1.9\\%$ performance improvement across eight real-world datasets, demonstrating the practicality of our approach. [7] Moini, Piran, “Auditory System,” in Functional and Clinical Neuroanatomy, Academic Press, 2020, pp. 363-392. >**Q5:** Please report the standard deviation of SpikF in Table 1. We report error bars of SpikF and iTransformer in [Table 2](https://anonymous.4open.science/r/0D02/), [Table 3](https://anonymous.4open.science/r/0D02/) and [Table 4](https://anonymous.4open.science/r/0D02/). --- Rebuttal Comment 1.1: Comment: Your rebuttal is well-argued. The feasibility of Fast Fourier Transform (FFT) operations in neuromorphic hardware, as supported by your reference, is acknowledged. There is one issue I indeed care about: **from the perspective of novelty, this paper is much like an "A+B" paper (ignore the storytelling of the authors), which means just taking the common module (FFT) from ANNs to SNNs without detailed analysis**. Personally, I do not like this style of SNN research (but I realize that many prior SNN works are just like "A+B"). However, I raised my score to 3: Weak accept (i.e., leaning towards accept, but could also be rejected), primarily because the authors provided a solid rebuttal and demonstrated considerable effort. In truth, my actual score is closer to **2.5**. If I were conducting research on FFT in SNNs, I would first justify its necessity from a theoretical perspective to explore the mathematical connection between FFT and SNNs. I encourage the authors to reflect on this aspect. **I also sincerely urge the ACs and other reviewers to evaluate the "A+B" issue of this paper.** That concludes my review. Thank you.
--- Reply to Comment 1.1.1: Comment: Thank you for your suggestions. We appreciate your concerns about the theoretical foundation and novelty of SpikF and are pleased to offer more detailed discussion. >**Q1:** If I were conducting research on FFT in SNNs, I would first justify its necessity from a theoretical perspective to explore the mathematical connection between FFT and SNNs. I encourage the authors to reflect on this aspect. Regarding the necessity of incorporating FFT into SNN architectures, we offer a detailed analysis: As [1] suggests, the dynamics of membrane potential in SNNs provide a unique method for capturing temporal data intricacies. However, this can result in a **separated receptive field**, potentially missing global temporal information. **Proof:** Given the dynamics of LIF neurons: $$U[t]=V[t-1]+{1 \over \tau_m}\left(I[t]-V[t-1]+V_{rest}\right)$$ $$S[t]=H\left(U[t]-V_{th}\right)$$ $$V[t]=U[t]\left(1-S[t]\right)+V_{rest}S[t]$$ For two series of stimulation $I_1[1], I_1[2], ..., I_1[t^*]$ and $I_2[1], I_2[2], ..., I_2[t^*]$ where $ S[1] = S[2] = ... = S[t^*-1] = 0 $ and $ S[t^*] = 1 $, these sequences are equivalent in terms of membrane potential when $ t \ge t^* $, as $ V[t^*] = V_{rest} $. If we assume that $S[t^1]=S[t^2]=...=S[t^s]=1$ and $S[t]=0$ otherwise, then the receptive field of the LIF neuron is limited to the regions $[1, t^1], [t^1+1, t^2], ..., [t^{s-1}+1, t^{s}]$ and $[t^s+1, T]$. This limitation hinders SNNs in long-term prediction domains, which require modeling long-term dependencies [2]. Thus, relying solely on SNN **internal dynamics** is insufficient; **external dynamics** are necessary for modeling long-term dependencies. While typical methods to expand the receptive field of SNNs involve linear layers and self-attention mechanisms, these are less suitable for sequential tasks due to their permutation-invariance [2], which has been proved in Appendix A.1.
In contrast, FFT transforms temporal series into the frequency domain, expanding the receptive field to the entire time-series. Modifications in the frequency domain influence the entire series, and sequential information is inherently embedded in frequency components via FFT's rotation factors: $$ F[k] = \sum_{t=1}^{T} S[t] e^{-j \frac{2\pi}{T} kt} $$ Thus, selecting frequency components allows for global influence, as $ F[k] $ is a function of $ S[1], S[2], ..., S[T] $, making FFT an ideal approach for external dynamics. In summary, incorporating FFT into the SNN architecture is essential in the time-series domain to expand the receptive field and improve long-term dependency modeling. [1] Lv, C. et al. Efficient and Effective Time-Series Forecasting with Spiking Neural Networks. ICML 2024. [2] Ailing Zeng et al. Are Transformers Effective for Time Series Forecasting? AAAI 2023. >**Q2:** From the perspective of novelty, this paper is much like an "A+B" paper (ignore the storytelling of the authors), which means just taking the common module (FFT) from ANNs to SNNs without detailed analysis. Regarding the novelty of SpikF, we provide further details. FFT is used in many models to obtain the frequency spectrum, but the processing methods of the frequency spectrum differ:
- **FITS** [3] uses a complex linear layer to interpolate the frequency spectrum, capturing both amplitude and phase information of the time-series.
- **FreTS** [4] utilizes frequency-domain MLPs to process the frequency spectrum, achieving a global view and energy compaction.
- **FEDformer** [5] generates sparse attention by dropping components of the frequency spectrum, reducing computational complexity and capturing detailed temporal structures.
- **FilterNet** [6] adapts signal-processing filters to weaken or strengthen specific frequency spectrum components, thus removing high-frequency noise.

These methods apply FFT to the entire series, emphasizing high-frequency components.
In contrast, SpikF employs patch and grouping mechanisms to enhance low- and middle-frequency spectrum utilization, emphasizing local information through spiking patches. This approach enables full-spectrum utilization, which has not been explored in previous research. Furthermore, SpikF represents a novel integration of frequency-domain analysis with SNNs. We hope this difference from previous FFT-based methods highlights the novelty of our work and addresses your concerns regarding the adoption of FFT from ANN research. [3] Zhijian Xu et al. FITS: Modeling Time Series with 10k Parameters. ICLR 2024. [4] Kun Yi et al. Frequency-domain MLPs are More Effective Learners in Time Series Forecasting. NeurIPS 2023. [5] Tian et al. FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting. ICML 2022. [6] Kun Yi et al. FilterNet: Harnessing Frequency Filters for Time Series Forecasting. NeurIPS 2024.
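The global-influence property of FFT argued in this thread (editing one frequency component changes every time step) can be verified numerically; a minimal NumPy sketch on toy data, not the SpikF pipeline:

```python
import numpy as np

T = 16
s = np.arange(1.0, T + 1)   # toy activation series
F = np.fft.fft(s)

# Zero out a single non-DC frequency component, then invert:
F_mod = F.copy()
F_mod[3] = 0.0
s_mod = np.fft.ifft(F_mod)

# Every time step moves by exactly |F[3]|/T: a local edit in the
# frequency domain has a global effect in the time domain.
delta = np.abs(s_mod - s)
print(np.allclose(delta, np.abs(F[3]) / T))  # → True
print(np.all(delta > 0))                     # → True
```

This is just the inverse-DFT linearity: removing the $k$-th bin subtracts the term $\frac{1}{T} F[k] e^{j 2\pi kt/T}$ from every $s[t]$, so no time step is left untouched.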
Summary: This paper introduces an attention-free framework, called the Spiking Fourier Network (SpikF), for long-term time series forecasting. ## update after rebuttal There is no external comment. And I think that it is a borderline paper. Claims And Evidence: This paper aims at modifying SNNs and attention for long-term time series forecasting. The authors conduct experiments to verify SpikF in time-series prediction tasks across multiple dimensions. These empirical investigations are comprehensive, and the results look convincing. Methods And Evaluation Criteria: The core of this paper is to replace attention with a Fourier-based method. The workflow is introduced clearly by Section 3.4 and Figure 1. I care about whether, and with what computational complexity, the fast Fourier transform in Eq. (9) supports large-scale data processing. Theoretical Claims: na Experimental Designs Or Analyses: These empirical investigations are comprehensive, and the results look convincing. Supplementary Material: na Relation To Broader Scientific Literature: na Essential References Not Discussed: na Other Strengths And Weaknesses: nothing. Other Comments Or Suggestions: Overall, I believe that this paper is a borderline paper. The authors provide comprehensive experiments, and the results look convincing. However, I still care about the complexity and practicability of Fourier-related approaches in large-scale time series forecasting. If these concerns are addressed in the rebuttal, I would consider raising my score. Questions For Authors: nothing. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate your thorough evaluation of our work and the expert feedback you have shared. In response to your constructive critique, we will provide clarifications addressing your question and incorporate these discussions into the revised manuscript to strengthen our theoretical framework. >**Q1:** However, I still care about the complexity and practicability of Fourier-related approaches in large-scale time series forecasting. We would like to clarify the efficiency of SpikF when facing large-scale data: 1. **Theoretical Analysis**: Theoretically, the computational complexity of the Fast Fourier Transform is $O(N \log N)$, outperforming conventional approaches like the linear transform ($O(N^2)$) and the self-attention mechanism ($O(N^2)$). This fundamental advantage makes FFT-based solutions inherently scalable for long-horizon forecasting tasks. 2. **Algorithmic Optimization**: Furthermore, FFT, as an algorithm with a long history, has been optimized by previous researchers. For example, parallel distributed computation [1] and memory-sharing strategies [2] have further improved the efficiency of FFT on large-scale data. These optimizations can also be easily adapted into our architecture in real-world applications. 3. **Experimental Validation**: As demonstrated in Figure 5, our method achieves a $9.0\\%$ improvement in terms of MSE when scaling the look-back window from $48$ to $720$ timesteps, empirically indicating the effective utilization of larger-scale time-series. We provide the results of Figure 5 here, where LW represents look-back window, and PL denotes prediction length.

| LW \ PL | 96 | 192 | 336 | 720 |
|------|------|------|------|------|
| 48 | 0.301 | 0.384 | 0.434 | 0.442 |
| 96 | 0.290 | 0.368 | 0.412 | 0.420 |
| 192 | 0.296 | 0.363 | 0.386 | 0.415 |
| 336 | 0.287 | 0.351 | 0.371 | 0.395 |
| 720 | 0.290 | 0.351 | 0.368 | 0.410 |

4.
**Event-driven Paradigm**: When facing real-world large-scale time-series, SpikF can leverage temporal sparsity in online data streams [3], achieving computational efficiency through event-driven FFT triggering. This makes our approach particularly suitable for edge computing scenarios with limited resources when facing large-scale data. In a word, Fourier-based approaches are more efficient than MLP-based or transformer-based algorithms for large-scale time series forecasting. [1] Yang, C. et al. A Parallel Fast Fourier Transform Algorithm for Large-Scale Signal Data Using Apache Spark in Cloud. ICA3PP 2018. vol 11336. [2] Eleftheriadis, Charalampos et al. Energy-Efficient Fast Fourier Transform for Real-Valued Applications. IEEE Transactions on Circuits and Systems II: Express Briefs. 69. [3] Jesus L. Lobo et al, Spiking Neural Networks and online learning: An overview and perspectives, Neural Networks, Volume 121, 2020, Pages 88-100. --- Rebuttal Comment 1.1: Comment: After reading the rebuttal and other reviewers' comments, I still believe that this paper is a borderline paper. Strengths: The authors provide comprehensive experiments, and the results look convincing. Weakness: About the complexity and practicability of Fourier-related approaches in large-scale time series forecasting. In my view, the FFT is suitable for energy-efficient computation, but not adopted in large-scale time series forecasting. Besides, what does $N$ mean in the Theoretical Analysis? There should be two indices that include the temporal and spatial dimensions. --- Reply to Comment 1.1.1: Comment: Thank you for your comments regarding the complexity and practicality of FFT-based methods. We would like to provide further discussion on these points. >**Q1:** Besides, what does $N$ mean in the Theoretical Analysis? There should be two indices that include the temporal and spatial dimensions. $N$ represents the length of the input time-series.
Since we are focusing on large-scale time-series forecasting, which typically involves processing time-series with long input sequences, we have omitted the spatial dimension of the time-series data for simplicity. If we denote the number of spatial channels as $D$, then the computational complexities of FFT, the linear transform, and the self-attention mechanism are $O(DN \log N)$, $O(DN^2)$ and $O(DN^2)$, respectively. >**Q2:** In my view, the FFT is suitable for energy-efficient computation, but not adopted in large-scale time series forecasting. In our previous response, we have discussed the theoretical computational complexity of FFT: >Theoretically, the computational complexity of the Fast Fourier Transform is $O(N \log N)$, outperforming conventional approaches like the linear transform ($O(N^2)$) and the self-attention mechanism ($O(N^2)$). This fundamental advantage makes FFT-based solutions inherently scalable for long-horizon forecasting tasks. We have also highlighted optimizations of FFT specifically designed for large-scale data processing: >Furthermore, FFT, as an algorithm with a long history, has been optimized by previous researchers. For example, parallel distributed computation [1] and memory-sharing strategies [2] have further improved the efficiency of FFT on large-scale data. These optimizations can also be easily adapted into our architecture in real-world applications. Moreover, our experimental results have shown the precision of SpikF in handling large-scale time-series data: >As demonstrated in Figure 5, our method achieves a $9.0\\%$ improvement in terms of MSE when scaling the look-back window from $48$ to $720$ timesteps, empirically indicating the effective utilization of larger-scale time-series. We provide the results of Figure 5 here, where LW represents look-back window, and PL denotes prediction length.
| LW \ PL | 96 | 192 | 336 | 720 |
|------|------|------|------|------|
| 48 | 0.301 | 0.384 | 0.434 | 0.442 |
| 96 | 0.290 | 0.368 | 0.412 | 0.420 |
| 192 | 0.296 | 0.363 | 0.386 | 0.415 |
| 336 | 0.287 | 0.351 | 0.371 | 0.395 |
| 720 | 0.290 | 0.351 | 0.368 | 0.410 |

To further illustrate the application of FFT in large-scale time series methods, we would like to mention the following approaches: By separating low-frequency components from high-frequency components, the original time-series can be decomposed into trend and seasonality subseries [3], thereby enabling distinct feature utilization for trend and seasonality. By selecting the top $k$ frequency components in the frequency spectrum, large-scale time-series data can be organized into a series of 2D time-series with different scales [4], capturing patterns across both temporal and frequency domains. Regarding SpikF, we initially employ a patch mechanism to enhance the utilization of large-scale time-series data [5]. Subsequently, grouped FFT is applied to the spiking patches, facilitating full-spectrum utilization of extensive time-series data. Furthermore, according to the convolution theorem [6], the selection mechanism we utilize after FFT is equivalent to the convolution operation in the time domain. This implies that **sparse operations in the frequency domain can lead to dense operations in the time domain**. When combined with the inherent sparsity of spikes, this property can reduce computational complexity. In a word, FFT has been validated by previous research for processing large-scale time-series data. It not only uncovers general characteristics of the large-scale time-series but also transforms the structure of the data, thereby enhancing the subsequent feature extraction process. Furthermore, FFT has computational advantages, particularly when incorporated with parallel computation or SNN architectures. [1] Yang, C. et al.
A Parallel Fast Fourier Transform Algorithm for Large-Scale Signal Data Using Apache Spark in Cloud. ICA3PP 2018. vol 11336. [2] Eleftheriadis, Charalampos et al. Energy-Efficient Fast Fourier Transform for Real-Valued Applications. IEEE Transactions on Circuits and Systems II: Express Briefs. 69. [3] H. Musbah et al. Identifying Seasonality in Time Series by Applying Fast Fourier Transform, IEEE EPEC 2019, pp. 1-4. [4] Haixu Wu et al. TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis. ICLR 2023. [5] Yuqi Nie et al. A Time Series is Worth 64 Words: Long-term Forecasting with Transformers. ICLR 2023. [6] C. A. Blackwell et al. The Convolution Theorem in Modern Analysis. IEEE Transactions on Education, vol. 9, no. 1, pp. 29-32.
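The convolution-theorem point made above (sparse selection in the frequency domain corresponds to a dense circular convolution in the time domain) can be checked directly; a minimal NumPy sketch with an assumed toy series and mask:

```python
import numpy as np

T = 8
x = np.arange(1.0, T + 1)       # toy time series
mask = np.zeros(T)
mask[[0, 1, T - 1]] = 1.0       # keep only low frequencies (conjugate-symmetric mask)

# Frequency-domain selection:
y_freq = np.fft.ifft(np.fft.fft(x) * mask).real

# Equivalent circular convolution in the time domain, kernel h = ifft(mask):
h = np.fft.ifft(mask).real
y_time = np.array([sum(x[m] * h[(n - m) % T] for m in range(T)) for n in range(T)])

print(np.allclose(y_freq, y_time))  # → True
```

The sparse binary mask (3 of 8 bins kept) corresponds to a kernel `h` that is dense in time, matching the bolded claim above.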
Summary: This work addresses two key challenges in applying Spiking Neural Networks (SNNs) and Transformer architectures to long-term forecasting: (1) capturing long-range dependencies, which increases computational and energy costs, and (2) the lack of effective positional encoding for Spiking Transformers. To address these issues, the authors propose SpikF, a novel architecture that introduces a spiking version of Patch Embedding to divide inputs into patches and replaces self-attention with a Spiking Frequency Selection (SFS) mechanism to model dependencies. Experiments show that SpikF reduces error by 1.9% compared to state-of-the-art models. Additionally, the authors analyze energy efficiency by comparing Synaptic Operations (SOPs) in SpikF with the FLOPs of conventional models, demonstrating its superior energy efficiency. According to the authors, SpikF is the first SNN-based benchmark providing comprehensive evaluation across long-term forecasting datasets. Claims And Evidence: The authors make two main claims: Performance Improvement: SpikF achieves a lower Mean Absolute Error (MAE) across multiple long-term time-series benchmark datasets. The claim is well-supported. Experimental results in Table 1 demonstrate clear improvements over baselines like PatchTST, iTransformer, and FEDformer. Energy Efficiency: SpikF is significantly more energy-efficient (75.05% lower energy consumption) than existing ANN-based models. The claim is not well-supported. - Figure 2 lacks clarity regarding the meaning of the firing rate (α) in terms of the number of spikes per unit time. - The values of α used in the experiments (Table 1) are unspecified, and it is unclear how performance depends on α. - Table 2 and Figure 3 likely underestimate energy consumption. Since the LIF neuron dynamics are part of the computation, directly comparing SOPs and FLOPs may not be a fair measure of energy efficiency. 
Methods And Evaluation Criteria: The performance evaluation is convincing, with comparisons to multiple baselines and ablation studies. However, as previously stated, the impact of the firing rate (α) on performance is unclear. An additional analysis is necessary. The energy efficiency estimation (Appendix C.1) is inadequate. A more formal methodology is required. I suggest revising the energy analysis with a more rigorous approach. Theoretical Claims: No, but the claims seem correct. Experimental Designs Or Analyses: The experimental setup in Table 1 and Table 3 appears reasonable. Supplementary Material: A.1, B: Reviewed, no major concerns. C: Reviewed, comments provided in relevant sections. Relation To Broader Scientific Literature: The paper is well-positioned in spiking neural networks, time-series forecasting, and energy-efficient AI. Transformer-based forecasting models (Autoformer, FEDformer, PatchTST, iTransformer) are properly cited. However, the role of frequency-based methods in time-series forecasting is not well-discussed. More references to prior work in this area would strengthen the positioning. Essential References Not Discussed: NA Other Strengths And Weaknesses: - Strengths Novelty: The Spiking Fourier approach is an interesting and innovative alternative to traditional SNN architectures. Impact: Addresses real-world concerns in energy-efficient AI. Clarity: The paper is generally well-written, with clear motivation and method descriptions. - Weaknesses Energy efficiency is questionable: The paper does not convincingly justify how SpikF is more energy-efficient than ANN-based models. On a more general note, It is unclear whether Transformers with a sparsely active layer are a proper avenue for truly energy-efficient models. Unclear limitations of the patch encoder: The impact of using patches is not discussed. The implementation details of the patch encoding layer are unclear. 
Other minor points: - Section 2.2: The variable U is undefined, and it is unclear whether the membrane voltage has a reset mechanism. - Section 3.4: The SFS module is not well described. - Section 3.5: The motivation for the architecture choice is unclear, I would suggest to add more motivations. Other Comments Or Suggestions: - Clearly define α and analyze its impact on performance. - Provide a more rigorous energy consumption analysis, ensuring a fair comparison between SOPs and FLOPs. Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We deeply appreciate your thorough review and insightful feedback. We will address your questions one by one in the following sections. >**Q1:** Figure 2 lacks clarity regarding the meaning of the firing rate ($\alpha$) ... $\alpha$ denotes the firing rate of the SNN and can be formulated as: $$ \alpha = \frac{\sum_{k=1}^{T_s}s_k}{T_sn} $$ where $s_k$ represents the number of spikes released at time step $k$, and $n$ denotes the number of neurons. >**Q2:** The values of $\alpha$ used in the experiments (Table 1) are unspecified, and it is unclear how performance depends on $\alpha$. The values of $\alpha$ can be found in [Table 1](https://anonymous.4open.science/r/58FD/). We vary the choice of $T_s$ to measure the impact of $\alpha$, as is commonly done in previous works [1], and find that SpikF achieves the best performance when $\alpha\approx0.2$ ([Table 2](https://anonymous.4open.science/r/58FD/)). [1] Yao et al., “Attention Spiking Neural Networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 45, no. 8, pp. 9393-9410, 2023. >**Q3:** However, the role of frequency-based methods in time-series forecasting is not well-discussed. We extend our discussion of frequency-based methods. **FITS**, **FreTS**, **FEDformer** and **FilterNet** employ a complex linear layer, MLPs, sparse attention, and filters, respectively, to enhance the utilization of high-frequency information. In contrast, SpikF enhances high-frequency focus via patching and grouping while using FFT for low-frequency analysis, achieving comprehensive spectrum efficiency. >**Q4:** It is unclear whether Transformers with a sparsely active layer are a proper avenue for truly energy-efficient models. We compare SpikF (1.08M) with FEDformer (20.68M), which is a sparse transformer with frequency-domain analysis.
SpikF outperforms FEDformer with a $19.15\times$ smaller model size and $15.4\\%$ better accuracy, demonstrating the SNN's superiority in energy-efficient forecasting ([Table 3](https://anonymous.4open.science/r/58FD/)). >**Q5:** The impact of using patches is not discussed. The use of patches enhances the utilization of local information and long history information. The experimental results show that the patch mechanism reduces MSE by $2.6\\%$ ([Table 4](https://anonymous.4open.science/r/58FD/)). >**Q6:** The implementation details of the patch encoding layer are unclear. We modify some of the equations and narratives to better describe the pipeline of the Spiking Patch Encoder (SPE): The input sequence $x^{1:L}$ is first divided into patches: $$p^k=x^{\frac{L}{N}(k-1)+1:\frac{L}{N}k}$$ where $N$ is the number of patches. Then each patch is processed by a spiking linear layer: $$S_{enc}^{T_s(k-1)+1:T_sk}=\mathcal{SN}(\text{BN}(\text{LN}(p^k)))$$ The SPE serves the role of utilizing local information and transforming continuous time-series into binary spikes. >**Q7:** Section 2.2: The variable $U$ is undefined, and it is unclear whether the membrane voltage has a reset mechanism. The membrane potential will be reset to the resting potential once it reaches the threshold. $U[t]$ represents the membrane potential, according to the following formula: $$ V[t]=\begin{cases} U[t],\ \text{if}\ U[t]<V_{th}\\\\ V_{rest},\ \text{if}\ U[t]\geq V_{th} \end{cases} $$ >**Q8:** Section 3.4: The SFS module is not well described. The spiking patches generated by the Spiking Patch Encoder are first grouped as $G^i$: $$ \mathbf{G}^i=\{S_{enc}^i,S_{enc}^{i+g},\dots,S_{enc}^{i+(\frac{N}{g}-1)g}\} $$ where $g$ denotes the number of groups.
A spiking max pooling layer is used to emphasize the mutual key frequencies of different groups: $$\mathbf{M} _{sel} = \text{SMP}(\mathcal{M} _{sel}^1, \mathcal{M} _{sel}^2, \dots, \mathcal{M} _{sel}^g)$$ The SFS module is responsible for selecting key frequency components from the spiking patches. >**Q9:** Section 3.5: The motivation for the architecture choice is unclear, I would suggest to add more motivations. Following your suggestion, we refine our motivations in Section 3.5: Grouped feature utilization preserves the time dynamics of the SNN by avoiding the information loss caused by averaging [2], and reduces the computational complexity of the spiking decoder. [2] Zhou et al. Spikformer: When Spiking Neural Network Meets Transformer, ICLR 2023. >**Q10:** Table 2 and Figure 3 likely underestimate energy consumption. Our method estimates energy consumption based on SOPs and FLOPs. We provide a more comprehensive analysis of energy consumption, including ACE, $E_{Mem}$, $E_{Opts}$, $E_{Addr}$ and $E_{Total}$, following [3-4], in [Table 5](https://anonymous.4open.science/r/58FD/). [3] Zhang et al. Sparse transformer with local and seasonal adaptation for multivariate time series forecasting. Sci Rep. [4] Tian et al. FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting. ICML 2022. --- Rebuttal Comment 1.1: Comment: I appreciate the answers, and think that they improve the paper. However, I do not think that the paper is at the level of a strong accept (partially due to my limited knowledge), so I will keep my score (4: Accept).
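The patch split described in the Q6 answer of this thread amounts to a simple reshape when $N$ divides $L$; a minimal sketch (the spiking linear layer with BN and LN is abstracted into a plain projection plus thresholding here as an assumption, with an arbitrary random weight matrix):

```python
import numpy as np

L, N = 96, 8                     # look-back length and number of patches
x = np.arange(L, dtype=float)    # toy input series x^{1:L}

# p^k = x^{(L/N)(k-1)+1 : (L/N)k}  -> one non-overlapping patch per row
patches = x.reshape(N, L // N)
print(patches.shape)             # → (8, 12)

# Crude stand-in for the spiking linear layer SN(BN(LN(p^k))):
# project each patch, then binarize with a zero threshold.
rng = np.random.default_rng(0)
W = rng.standard_normal((L // N, 4))
spikes = (patches @ W > 0).astype(float)
print(spikes.shape)              # → (8, 4)
```

The key property this sketch preserves is that each row of `spikes` depends only on its own local patch, which is how the SPE encodes local information as binary spikes.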
Summary: The authors propose **SpikF**, a novel SNN-based architecture designed for long-term prediction tasks. Technically, the **spiking patch encoder** is introduced to efficiently convert sub-series into spikes with low computational complexity. Additionally, a **spiking frequency selection mechanism** is implemented to identify and retain core components, thereby enhancing overall performance. Experimental results demonstrate that SpikF outperforms state-of-the-art (SOTA) methods across eight long-term prediction tasks and exhibits exceptional suitability for deployment on edge devices. This work highlights the potential of SNNs in handling complex temporal tasks while maintaining computational efficiency, making it a promising solution for real-world applications. Claims And Evidence: Yes, the claims are very clear. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, I have checked the correctness of all proofs. Experimental Designs Or Analyses: The effectiveness of the proposed method has been extensively validated through numerous experiments. Supplementary Material: The supplementary material is provided after the manuscript. Besides, the authors also provide the code in the appendix. Relation To Broader Scientific Literature: No Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. SNNs for long-term time-series prediction is very novel and interesting topic. 2. The writing is very easy to understand and straightforward. 3. Eight real-world long-term benchmark datasets are conducted to verify the effectiveness of the proposed SpikF. Weaknesses: 1. Energy Efficiency Analysis. While the authors reference previous methods to analyze the energy efficiency of SNN algorithms, the use of AC (Accumulate) or MAC (Multiply-Accumulate) operations for power consumption calculations may not be entirely convincing for SNNs. 
Given the critical importance of this metric, the rationale behind this approach should be thoroughly justified. The authors are encouraged to provide their insights or at least discuss this limitation in detail, as it significantly impacts the credibility of the energy efficiency claims. A more tailored energy analysis specific to SNNs, such as spike-based operations, would strengthen the paper's contributions. 2. Lack of Inference Time. Although the authors compare computational complexity across different methods in Fig. 2, they should also provide a comparison of inference times. Since both contributions of the paper (i.e., the spiking patch encoder and the spiking frequency selection mechanism) are directly related to algorithmic inference efficiency, a detailed analysis and discussion of inference time comparisons would greatly enhance the practical relevance of the work. 3. Improvements in Writing. The writing and presentation of the paper could be further refined. For example, there is significant blank space around Equation (13), and the font sizes in the figures are inconsistent. It is recommended that the authors optimize the manuscript's layout and formatting in future revisions to improve readability and professionalism. Additionally, ensuring consistent font sizes and minimizing unnecessary blank spaces would enhance the overall quality of the paper. Other Comments Or Suggestions: No Questions For Authors: Please see the weaknesses and respond to each comment. Besides, two questions are listed below: Why are SNNs more advantageous than ANNs in temporal prediction tasks? While the power efficiency advantage is clear, could the authors provide some examples of future edge device applications? Based on the above comments, I am currently leaning toward a weak reject, or perhaps borderline. However, I will also consider the other reviewers' opinions and the authors' responses. If the authors provide satisfactory answers, I will likely raise my score. 
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your careful review and the insightful questions you have raised. We provide a detailed point-by-point response to each of your valuable comments to ensure clarity and address all aspects thoroughly. >**Q1:** Lack of Inference Time. ... a detailed analysis and discussion of inference time comparisons would greatly enhance the practical relevance of the work. We fully agree that including the inference time of SpikF will increase the practical relevance of our work. We utilize the data in [1] to estimate the inference times of SpikF and iTransformer, which are $2.47ms$ and $7.87ms$, respectively. This indicates that the inference time of SpikF is $3.19\times$ lower than that of iTransformer. [1] Sumit Bam Shrestha et al., "Efficient Video and Audio Processing with Loihi 2," IEEE ICASSP 2024. >**Q2:** Improvements in Writing. ... Additionally, ensuring consistent font sizes and minimizing unnecessary blank spaces would enhance the overall quality of the paper. We have rearranged the layout and adjusted the font sizes of the figures to ensure consistency. Furthermore, we have reviewed our writing to ensure clarity of expression. The improved figures can be found at [Anonymous Link](https://anonymous.4open.science/r/CE55/). >**Q3:** Why are SNNs more advantageous than ANNs in temporal prediction tasks? The temporal dynamics of the membrane potential in SNNs enable the processing of complex time-series data while maintaining energy efficiency [2]. Specifically, SNNs have two advantages over ANNs in temporal prediction tasks [2-5]: 1. Energy efficiency. Because information is transmitted through binary spikes, SNNs empirically consume less energy than their ANN counterparts, making them suitable for edge applications in resource-constrained environments [2].
Additionally, the event-driven nature of SNNs is capable of utilizing the dynamic changes in time-series data in real-world scenarios [3, 4], which further promotes the energy efficiency of SNNs in real-world applications. 2. The subtle dynamics of the membrane potential naturally incorporate the time dimension into the model architecture. According to equations (1) to (4), these dynamics are non-linear and can efficiently process complex time-series data, whereas such a mechanism is absent in traditional ANN architectures [5]. [2] Roy, K. et al. Towards spike-based machine intelligence with neuromorphic computing. Nature. [3] Liu, S.-C. et al. Neuromorphic sensory systems. Current Opinion in Neurobiology. [4] Vanarse, A. et al. A review of current neuromorphic approaches for vision, auditory, and olfactory sensors. Frontiers in Neuroscience. [5] Lv, C. et al. Efficient and Effective Time-Series Forecasting with Spiking Neural Networks. ICML 2024. >**Q4:** While the power efficiency advantage is clear, could the authors provide some examples of future edge device applications? Edge devices have high requirements for low power consumption to extend standby time. As SpikF has the power-efficiency advantage, it is suitable for edge applications, according to [6-8]: 1. Industrial sensors [6]. According to [6], SpikF can be deployed in industrial sensors to autonomously monitor safety accidents. 2. Wearable healthcare devices [7]. According to [7], SpikF can be used in healthcare devices to predict diseases and accelerate the diagnostic procedure. 3. Automation systems [8]. [8] shows the possibility of SpikF being adopted in automation systems. For example, SpikF can be used to process the sequential data collected by gas and pressure sensors, thus facilitating the control of robots. [6] Zhou, Y. et al. Computational event-driven vision sensors for in-sensor spiking neural networks. Nature Electronics. [7] Maji, P. et al.
SNN Based Neuromorphic Computing Towards Healthcare Applications. IFIP Advances in Information and Communication Technology. [8] Jiang, X. et al. Fully Spiking Neural Network for Legged Robots. IEEE ICASSP 2025. >**Q5:** A more tailored energy analysis specific to SNNs, such as spike-based operations, would strengthen the paper's contributions. In our manuscript, we use AC and MAC [9] to evaluate ANN-based models and synaptic operations [10] for SNN-based models. Both metrics have been widely acknowledged, ensuring a fair comparison between ANNs and SNNs. Furthermore, for a more comprehensive analysis, we adopt the methods in [11] and [12] and find that SpikF is $6.27\times$ and $3.16\times$ more efficient than iTransformer in terms of ACE and $E_{Total}$, respectively ([Table 1](https://anonymous.4open.science/r/D401/)). [9] Zhijian Xu et al. FITS: Modeling Time Series with 10k Parameters. ICLR 2024. [10] Zhou et al. Spikformer: When Spiking Neural Network Meets Transformer, ICLR 2023. [11] Shen G, et al. Are Conventional SNNs Really Efficient? A Perspective from Network Quantization. CVPR. [12] Lemaire E, et al. An analytical estimation of spiking neural networks energy efficiency. NeurIPS.
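The AC/MAC-based comparison discussed above is usually computed with first-order 45 nm per-operation energy figures (roughly 4.6 pJ per MAC and 0.9 pJ per AC, as in the SNN literature); the constants, the firing rate, and the toy operation count below are illustrative assumptions, not SpikF's measured numbers:

```python
E_MAC = 4.6e-12  # joules per multiply-accumulate (45 nm estimate, assumed)
E_AC = 0.9e-12   # joules per accumulate (45 nm estimate, assumed)

def ann_energy(flops):
    """ANN estimate: every FLOP is costed as a MAC."""
    return flops * E_MAC

def snn_energy(sops, firing_rate):
    """SNN estimate: only emitted spikes trigger accumulates."""
    return sops * firing_rate * E_AC

ops = 1e9  # toy nominal operation count
ratio = ann_energy(ops) / snn_energy(ops, firing_rate=0.2)
print(round(ratio, 1))  # → 25.6
```

This makes the structure of such claims explicit: the advantage is the product of the cheaper per-operation cost (MAC vs. AC) and the spike sparsity (the firing rate), so reported ratios depend directly on the measured firing rate.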
The Synergy of LLMs & RL Unlocks Offline Learning of Generalizable Language-Conditioned Policies with Low-fidelity Data
Accept (spotlight poster)
Summary: Existing reinforcement learning (RL) approaches often struggle to generalize to unseen goals and states. To address this, this paper proposes TEDUO, a training pipeline for offline language-conditioned policy learning in symbolic environments. TEDUO employs large language models (LLMs) as generalizable instruction-following agents. Experimental results demonstrate that TEDUO outperforms baselines. ## update after rebuttal I have carefully read all the reviewers' comments as well as the authors' rebuttal. Most of my concerns have been addressed. So I raise the score to 4. Claims And Evidence: The main claims of this paper are that TEDUO can: 1) Ground LLMs for Multi-Step Decision Making, and 2) Enhance Generalization and Data Efficiency. These claims are supported by the experimental results. Methods And Evaluation Criteria: The proposed method is reasonable and expected to be effective. However, in the experimental design, comparisons are only made with the non-finetuned LLM backbone and certain baselines, lacking sufficient RL-based policies as baselines. Theoretical Claims: The theoretical claims explained in Section 2 do not present any obvious issues. Experimental Designs Or Analyses: As stated in the Methods and Evaluation Criteria section, the paper lacks sufficient RL-based policies as baselines, especially prior SOTA methods on BabyAI and Webshop. Supplementary Material: I have reviewed the supplementary materials (code) and all appendices. Relation To Broader Scientific Literature: I did not find particularly relevant literature, aside from works on large language models and reinforcement learning policies. Essential References Not Discussed: The paper should introduce more RL-based policies and clearly differentiate them from the RL method used in TEDUO. Other Strengths And Weaknesses: Strength 1. This paper is well-written, with good presentation and clear motivation, making it easy to follow. 2.
The paper conducts sufficient ablation studies to support its claims and demonstrate the effectiveness of the proposed method. 3. Using RL-based policies to distill LLMs is an interesting approach and may provide insights for future LLM-based policy research. Weakness 1. The paper conducts experiments on relatively simple environments such as BabyAI and Webshop, which are insufficient to demonstrate the proposed method's generalization to more challenging online environments like Habitat, RLBench, or Minecraft. Other Comments Or Suggestions: In Figure 1, the text describing the three key steps could be arranged horizontally to improve readability and help readers quickly understand the process. Questions For Authors: 1. In line 078, how are these goals obtained? Are they manually designed? 2. What is the temperature setting for the LLM? Would a high temperature setting affect the LLM's performance (either improving or degrading it)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your feedback on our work. Below, we would like to address your questions and concerns. --- ### P1 Additional baselines. We understand that a key concern in your review is the lack of additional baseline comparisons. However, to the best of our knowledge, no existing standard RL methods operate under the exact same setting as TEDUO beyond those already included in our benchmarking. Prior work typically requires either access to an online environment (something we explicitly rule out given our focus on offline training paradigms) or labeled demonstration data, whereas we work with only an unlabeled set of state-action transitions. - **New result:** As suggested by reviewer `AzTQ`, we include an additional comparison with DeepSeek-R1, not available at the time of our original submission. We have incorporated its performance into our benchmark; results are available under this [link](https://imgur.com/a/wxLwFgh). If the reviewer is aware of any other works that can be applied in our setting, we would be eager to include them in our comparisons to further strengthen our evaluation. We would be grateful for any suggestions! ### P2 The complexity of tasks and environments. We appreciate the reviewer’s concern regarding the complexity of our evaluation environments. BabyAI and WebShop were deliberately chosen as two distinct yet illustrative environments: a grid-based world and a text-based web interaction task. This illustrates TEDUO’s adaptability across symbolic environments of different kinds. Additionally, as detailed in Appendix C.1, WebShop presents significant challenges, such as dynamic UI-based actions and linguistic action requirements, making it a non-trivial testbed. TEDUO's modularity makes it readily generalizable to other symbolic environments, as outlined in Appendix C.2 (*Guide for Practitioners*). 
While environments such as RLBench, Habitat, and Minecraft are indeed complex, they are also image-based, which diverges from TEDUO’s focus on symbolic, text-based tasks. As acknowledged in our limitations section, continuous control tasks like RLBench may not be best suited for LLM-based agents. However, we recognize the potential for extending TEDUO’s applicability via, e.g., VLMs. We leave such extensions for future work. While scaling to more complex settings is an important goal, it is common practice in RL research to focus on simple environments for a detailed and interpretable analysis. Notably, many works recently published at top-tier venues still focus on toy environments. For instance, this includes the relevant related work [1] cited by reviewer **azTQ**, which uses a card game as its primary environment for experimentation, and the related works cited by reviewer **vbjR** [1,3,4], which use environments much simpler than WebShop. Our focus on these settings allowed us to better understand TEDUO’s core mechanisms, providing a foundation for future extensions to more complex settings. Finally, while BabyAI and WebShop may appear toy-like, we respectfully argue that the significance of our results goes beyond the complexity of these benchmarks. To the best of our knowledge, TEDUO is the first method capable of learning generalizable, language-conditioned policies from fully unlabeled data, which we find a significant and exciting result. The presented results demonstrate the promise of combining LLMs with RL-based approaches to enable flexible generalization. - **New Results:** To deepen the argument about flexible generalization, we have performed a new experiment demonstrating TEDUO’s generalization to *larger, unseen BabyAI grids* after training on smaller grids. Performance remains robust for grids three times larger than the grids seen during training. Results are available under this [link](https://imgur.com/a/kjgK47J). 
### P4 How are the goals in $\mathcal{G}^{tr}$ obtained? The goals are manually designed (see Appendix C.8.1 and C.8.2, paragraphs “Goals” for details). ### P5 What is the temperature setting for the LLM? We used the default temperature of the Llama-8B-Instruct model: 0.7 (both for TEDUO-fine-tuned and baselines). Motivated by your suggestion, we ran an additional small-scale experiment using online evaluation. Results, available at this [link](https://imgur.com/a/gxPcSbQ), show a slight improvement for temperature=1.2. We interpret this as a higher temperature promoting more diverse actions, helping the agent not to get stuck in suboptimal behaviors. Please note that tuning the temperature is only feasible when simulators are available. In offline RL scenarios, model selection is complicated, and hyperparameter tuning may not be possible. --- We sincerely appreciate the reviewer’s time and thoughtful feedback. We hope that our clarifications and new results address your concerns and strengthen the contributions of our work. Please let us know if there are any additional points that would benefit from further elaboration! --- Rebuttal Comment 1.1: Comment: Thank you for providing additional experimental results and clarifications. Most of my concerns have been addressed. If the remaining concern mentioned below can also be resolved, I will consider increasing my score. ## Q1: Additional baselines The reviewer understands that identifying a suitable offline RL method for training with unlabeled data is a challenging task. However, if the first step of TEDUO is viewed as utilizing an LLM to label trajectories, then many existing offline RL methods could serve as reasonable baselines—assuming they are all trained on the data obtained from this initial step of TEDUO. Including more baseline comparisons could help make the contribution of this work more solid and convincing. 
--- Reply to Comment 1.1.1: Comment: Dear reviewer, We are glad that our previous response addressed most of your initial concerns. We agree that once the first step of TEDUO is completed, it is in principle possible to apply classical offline RL methods. However, our setting introduces two key complexities: (1) dealing with natural language goals, and (2) generalizing across a diverse set of tasks, as required in goal-conditioned reinforcement learning (GCRL). These aspects make standard offline RL methods less directly applicable. In particular, scaling value estimation in the presence of multiple goal-conditioned reward functions remains an open challenge. As an example, recent efforts to adapt Implicit Q-learning (IQL) to GCRL (excluding the natural language aspect) have succeeded only in maze navigation [1]. To our knowledge, IQL has not been successfully extended to natural language goal-conditioned RL, likely due to the scalability issues mentioned above. We think it is an exciting and valuable direction for future research. In our work, we searched extensively for applicable offline RL baselines for the BabyAI environment and found that the only viable option remains imitation learning using an LSTM+CNN policy, as used in our experiments. This is consistent with recent literature, where this approach is also the sole baseline adopted in comparable settings [2,3,4]. If the reviewer is aware of other relevant offline RL baselines applicable to training natural-language goal-conditioned policies in BabyAI, we would be happy to include and evaluate them in a revised version. We hope this answers your last concern, and we are happy to discuss further if needed. [1] (2024). Navigation with QPHIL: Quantizing Planner for Hierarchical Implicit Q-Learning [2] (Neurips 2022). Pre-Trained Language Models for Interactive Decision-Making [3] (RL workshop ICLR 2022). Zero-Shot Compositional Policy Learning via Language Grounding [4] (RL workshop ICLR 2023). 
Unified Policy for Interleaving Language Reasoning with Actions
Summary: This paper introduces TEDUO, a method for fine-tuning instruction-following LLM agents using an unlabeled dataset of interactions (i.e. environment transitions without instructions or rewards). TEDUO operates in two key stages: 1. The LLM labels the dataset by determining whether any possible goals are reached in each transition. 2. Tabular Offline Reinforcement Learning (RL) is then applied to learn one policy per goal. These policies generate a new expert demonstration dataset, which is used to fine-tune the LLM via imitation learning. Empirical results on BabyAI with Llama-3-8B demonstrate strong generalization and impressive performance. ## update after rebuttal As mentioned in my rebuttal comment, I deeply appreciate the effort put into this rebuttal, including the additional experiments and several updates to the manuscript. While I believe some empirical results—such as applying Imitation Learning directly to the demonstrations—might still be useful (even though the authors’ explanation for why this would lead to poor results is reasonable), the authors have addressed most of my concerns. I am therefore now recommending acceptance of this paper. Claims And Evidence: One important contribution of the paper is proposing an approach to leverage an unlabelled dataset of demonstrations. This approach leverages an LLM to label data given a known set of goals. First, several prior works proposed similar approaches, such as MineDojo (Fan et al., 2022) or LMA3 (Colas et al., 2023). Then, none of the experiments really leverages a true dataset of unlabelled demonstrations, since the demonstrations were collected using goal-conditioned policies. In comparison, MineDojo did use unlabelled recordings of humans. Finally, as the true labels are known, several natural questions arise, e.g.: What is the accuracy of the LLM as a goal labeller? Did the goal labeller also find intermediate goals achieved? However, no analysis is provided on this key part of the method. 
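For reference, the hindsight-labelling and tabular offline RL stages summarized above can be sketched in a few lines. Everything below is an illustrative assumption rather than the authors' code: the `reached` oracle stands in for the LLM-based goal-achievement check whose accuracy is questioned here, and the reward convention and hyperparameters are invented for the example.

```python
from collections import defaultdict

def hindsight_label(transitions, goals, reached):
    """Stage 1 (sketch): label each observed transition with every goal
    it achieves; `reached(state, goal) -> bool` stands in for the LLM."""
    labeled = defaultdict(list)  # goal -> list of (s, a, r, s') tuples
    for s, a, s_next in transitions:
        for g in goals:
            r = 1.0 if reached(s_next, g) else 0.0
            labeled[g].append((s, a, r, s_next))
    return labeled

def tabular_q_learning(dataset, alpha=0.1, gamma=0.95, sweeps=50):
    """Stage 2 (sketch): offline tabular Q-learning on one goal's data."""
    q = defaultdict(float)  # (state, action) -> value
    actions = {a for _, a, _, _ in dataset}
    for _ in range(sweeps):
        for s, a, r, s_next in dataset:
            best_next = max(q[(s_next, b)] for b in actions)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
    return q

# Toy chain environment: states 0-1-2, goal = reach state 2.
transitions = [(0, "R", 1), (1, "R", 2), (1, "L", 0), (0, "L", 0)]
labeled = hindsight_label(transitions, ["reach-2"], lambda s, g: s == 2)
q = tabular_q_learning(labeled["reach-2"])
```

In this toy instance the greedy policy of the resulting Q-table moves right toward the goal state; in the paper's pipeline, such per-goal behaviours are then distilled into the LLM via imitation learning.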
Appendix B.5 indicates the performance of the lightweight neural networks trained to reproduce the LLM's labelling, but the accuracy seems to use the LLM's labels as ground truth. The authors also argue that TEDUO is very robust to the quality of demonstrations. Appendix B.2 proposes an analysis of this, and I am really struggling to understand the results. In particular, the authors seem to say that collecting demonstrations with a random policy leads to better results than using an optimal policy for some goals. This appears surprising, and I do not understand how to relate the metric studied (the "number of unique initial states for which a goal is reachable") to TEDUO's performance. Additionally, even though these results seem to show that using an optimal goal-conditioned policy to collect demonstrations is not optimal for TEDUO, the authors still used it in the experiments in the main paper. Finally, the authors argue that they study, in comparison to prior works, generalization on "semantically distinct" goals. While they do not define what they mean by "semantically distinct", Table B.5 hints that all types of goals were in the training set for their experiments on BabyAI, meaning that what differs between the training and test set is the name of the objects that the agent must interact with (which is not different from what prior works did). Moreover, the authors argue that prior work mostly studied generalization on synonymous instructions, but they also introduce such goals by asking GPT-4 to paraphrase the original commands from BabyAI. Methods And Evaluation Criteria: The baselines chosen to compare TEDUO against, as well as the ablations with TEDUO + BabyAI-IL-bot (instead of the LLM) and the ones in Section 5.2, appear well chosen to me and very insightful. I believe other key ablations would be very important. 
For instance, what would the performance of the LLM directly finetuned with Imitation Learning be on the demonstrations (which should be of much lower quality than the ones collected with the policies trained with Offline RL)? What would be the performance of directly applying Offline RL to the LLM based on the demonstrations instead of first learning intermediate policies? A method such as ILQL (Snell et al., 2023) could be used here. Little is said about how the policies trained with Offline RL are used to produce the dataset for Imitation Learning. Section 3.3 seems to indicate that no interaction with the environment is performed and that an "empirical transition function" is used. More explanations on this part would be more than helpful. I also have concerns regarding the experiments with WebShop. In particular, Appendix C.6.2 indicates that the Offline RL part of TEDUO was replaced by "filtered Behavioral Cloning." This raises the question of whether Offline RL is really necessary for TEDUO, and also questions the choice of WebShop if parts of the method are not suited for it. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: Results from Section 5.2 indicate that the policies learned with Offline RL perform poorly. Yet, using them to collect demonstrations for the LLM to perform Imitation Learning leads to impressive results. This seems quite surprising. One could say that, even though the trajectories generated by the policies are of poor quality, it is sufficient for the LLM to adapt its strategy successfully. But even more surprisingly, using these demonstrations to perform Imitation Learning with BabyAI-IL-bot also leads to good results (on the training environment/goals at least). Could the authors comment on this? Supplementary Material: I reviewed the appendices. The extended related work properly covers the literature related to this work, in my opinion. 
Insightful details about the method and the experiments are also provided. Appendix B.4 could benefit from more discussion and comparisons between tabular offline Q-learning and DQN. Relation To Broader Scientific Literature: As explained before, I believe the contribution of hindsight relabelling is overstated, given how close it is to prior works. Apart from this, the literature is well-covered, including techniques similar but not applied in this paper, such as Inverse RL. Essential References Not Discussed: I do not see any essential references that were not discussed. Other Strengths And Weaknesses: I appreciated that the authors performed multiple ablations and generalization experiments. Other Comments Or Suggestions: The state abstraction part is said to be optional, yet it is the first subsection of the method section and abstract states are often referenced. I think this could be much clearer by either not mentioning abstract states too much in the main paper, or always using abstract states (and not saying this part is optional) and putting the ablation on their usefulness in the main paper. There are a few minor typos I spotted: - l.223: "We require a controlled with environment" - l.881: "an import ant metric" Questions For Authors: I do not have any further questions. Ethics Expertise Needed: ['Responsible Research Practice (e.g., IRB, documentation, research ethics, participant consent)'] Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback and valuable suggestions. Below, we address each of the key concerns and clarify aspects of our work. --- ### P1 Hindsight Labelling We acknowledge that LLM-based hindsight labeling itself is not novel, and we did cite prior work (Appendix A.2). The reviewer’s suggested references (MineDojo, LMA3) will also be added. However, these operate online, while TEDUO is fully offline. Our contribution also lies in scaling LLM-based reward functions by approximating them with NNs to reduce LLM calls. **Action:** We will refine our contribution statement to avoid overclaiming and emphasize these specific contributions. ### P2 LLM as a Goal Labeler We clarify that the results presented in B.5 report the accuracy of the rewards predicted by the NN reward approximators **with respect to the environment ground-truth rewards.** **Action:** We have additionally included the accuracy, precision, and recall metrics comparing LLM reward labeling to ground truth rewards. See this [link](https://imgur.com/a/Q2fixHr) for results. ### P3 Appendix B.2 We acknowledge the reviewer's difficulty in interpreting Appdx. B.2, and we agree that the definition of $|\mathcal{S}_0^g|$ was unclear. **Action:** We have provided a more detailed definition and the motivation behind this metric. The revisions can be found under this link: [link](https://imgur.com/a/8pGgMIx). Thank you for drawing our attention to this part! ### P4 Experimental designs **Unlabeled Demonstrations** The claim that "none of the experiments leverage a true dataset of unlabeled demonstrations" is incorrect. We always act as if the data collection policy were completely unknown. Even if the policy used is a mixture of goal-conditioned policies, we do not assume knowledge of which trajectory has been generated with respect to which goal. 
**Semantically Distinct Goals** We define two "semantically distinct" instructions as two goals with different goal states, rather than paraphrased versions of the same task. (E.g., ”Pick up a red box” is distinct from “Pick up a yellow key” and is semantically equivalent to “Collect a red container”). While prior works have mostly focused on paraphrased instructions, we specifically test whether TEDUO can generalize to goals that differ in key task attributes. While this may seem incremental, we note that conventional RL approaches have struggled with this form of generalization. Please also consult P2 in the answer to reviewer `zuF6` for **new results** on generalization to more complex tasks. **Construction of $\mathcal{D}^{SFT}$.** We confirm that no additional interaction with the environment occurs in the construction of $\mathcal{D}^{SFT}$; an empirical transition function is used. This is due to our focus on purely offline RL. ### P5 Fine-tuning with Imitation Learning. See P5 in the answer to reviewer `vbJR`. ### P6: Directly applying Offline RL We appreciate the reviewer’s suggestion of applying offline RL directly to the LLM. While promising, this poses challenges due to scale and the natural language goal-conditioned setting. ILQL is an interesting adaptation of Implicit Q-Learning, but it targets standard RL tasks with a single reward function, not GCRL, which requires learning multiple reward functions and generalizing across tasks. Scaling value estimation in GCRL remains an open challenge. Recent efforts to adapt IQL to GCRL (excluding the natural language aspect) have succeeded only in maze navigation [1]. To our knowledge, ILQL has not been extended to natural language GCRL, likely due to scalability issues. We agree this is a valuable research direction. ### P7 The necessity of Offline RL We would like to highlight that filtered BC significantly improves performance (from an average score of 5 to 50 out of 100), demonstrating its effectiveness. 
A detailed discussion on the necessity of offline RL can be found in P5 in the answer to reviewer vbJR. ### P8 Performance of offline RL Policies vs. Final Results We would like to clarify that the average performance of the offline RL policies is low due to evaluation on *unseen initial states,* i.e. states that have not been visited in $\mathcal{D}$. In such states, the Q-learning policies can only take random actions, leading to poor performance. For fine-tuning the LLM agent, we only use successful transitions to perform imitation learning via SFT. That is, $\mathcal{D}^{SFT}$ only includes trajectories that lead to goal achievement, which we can generate without access to online interaction with the environment by relying on the empirical state transition function and the learned reward functions. ### P9 State abstraction See P5 in the answer to reviewer `aztQ`. --- We thank the reviewer for their detailed and insightful feedback. We are confident that the revisions we plan to incorporate will further clarify and strengthen the paper and we are happy to answer any further questions you may have! --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I deeply appreciated the effort put into this rebuttal, including additional experiments and links to updated paragraphs. While I believe some empirical results such as using Imitation Learning directly on demonstrations might be useful (even though the authors' answer on why this would lead to poor results totally makes sense), the authors answered most of my concerns. I am therefore increasing my score.
Summary: The paper studies the problem of training generalizable RL policies with the help of language models. RL policies can generally achieve impressive performance given enough exploration/coverage over the (state, action) space, but if the RL policy networks are trained from scratch on a particular environment, they can perform very poorly in new environments. On the other hand, pre-trained large language models show remarkable capabilities for generalization and are becoming more and more popular at decision making tasks. This paper provides a recipe for training a language model on decision making tasks. Specifically, the paper uses LLMs to convert classical RL tasks into a text based representation and constructs solvable MDPs, next it uses tabular Q learning to solve these MDPs and learn optimal actions, and finally teaches an LLM these optimal actions for a given set of states and goals. The paper demonstrates that the resulting LLM learns generalizable strategies that can then transfer zero-shot to similar but unseen environments. # Update after rebuttal I like the paper's method and results --- and also the authors answered my concerns during the rebuttal process. **I maintain my score at 4. I think this is a strong paper, and would strongly recommend its acceptance**. Claims And Evidence: Yes, the paper makes sound claims supported by detailed experiments, at least in my opinion. Methods And Evaluation Criteria: Yes, the proposed methods and/or evaluation criteria make sense. Theoretical Claims: The paper has no theoretical claim of note to discuss. Experimental Designs Or Analyses: I checked the experimental design/analysis part of this paper, and it made sense to me. Supplementary Material: I read the paper appendix. The supplementary material for this paper contains its codebase, which I **did not check** personally to verify its correctness. 
Relation To Broader Scientific Literature: # Language Conditioned RL Early works established that language can specify goals for RL, but they often relied on expensive data gathering or manual labeling. For an example work on using language abstraction for decision making tasks (or the general flavor of work in this area), please see [4]. # LLM grounding in real tasks With the rise of large language models, researchers began using off-the-shelf LLMs as decision-makers in interactive tasks. For example, Yao et al. (2023) proposed treating a general-purpose LLM as an agent that can choose actions by generating and evaluating plans (the ReAct framework) [2]. Techniques like chain-of-thought prompting (CoT) and self-reflection have been shown to improve LLMs’ planning and reasoning on complex, multi-step tasks. Despite these improvements, recent analyses (e.g. by Szot et al., 2024 [3]) found that prompting alone is insufficient for long-horizon decision-making in dynamic environments. One can use in-context learning or fine-tuning to achieve grounding in an LLM. Due to certain limitations, the authors of this paper move away from the former direction and propose grounding via fine-tuning. Essential References Not Discussed: Nothing that I can think of. Other Strengths And Weaknesses: # Strengths I think the most important strength of this paper is that it shows generalization across new environments --- a key feature that traditional RL policies seem to lack. Pretrained LLMs have world knowledge and can learn strategies from a few environments but are then able to dynamically adapt to similar but new environments. This remarkable generalization potential could lead to general purpose decision making agents. Though the paper demonstrates this in a very toy setup, I think it is a powerful result and should be studied more by subsequent work. # Weaknesses 1. Experiments are in a very toy setup. 
It would be interesting to see if these results also hold in a more versatile set of environments. 2. (**Minor**) The paper title is very hard to understand and honestly, it seems to be LLM generated. Could the authors come up with a more suitable title? Other Comments Or Suggestions: None that I can think of; please see my questions below, and I would appreciate it if the authors can address them! Questions For Authors: ## Questions about base/instruction tuned models 1. Is the paper using the pretrained models (Llama-3-8B) or the instruction tuned versions (Llama-3-8B-Instruct)? 2. If the paper is using pretrained models, any particular reason why instruction tuned versions were not used? Could the authors give a comparison with the instruction tuned model as well? 3. It is possible that the base pretrained models are just bad at following instructions and hence we see a large performance improvement in Table 1. If one fine-tuned Llama-3-8B-Instruct instead using this paper’s method, how much improvement would one observe? ## Questions about Table 2 Do the authors have any insights on why the invalid actions for Llama-3-8B are double those of the fine-tuned model? Can it be problematic in certain situations, and can it be improved in some way? ## Question about training on these tasks directly I understand that for certain tasks, interacting with the environment or making a simulator is very difficult. But for the tasks that this paper experiments with, I imagine one could directly generate trajectories from the task and train the model on them. How would it perform compared to this paper’s complicated procedure/what benefit does this paper’s method have over that? For example, [1] (**concurrent work, so the authors need not cite it**) has similar ideas of using LLMs for general sequential decision making agents, but their pipeline seems significantly simpler compared to this paper’s method. Could the authors discuss this issue? 
**Overall I am excited about this paper, and happy to recommend its acceptance pending my questions above are answered satisfactorily.** # References [1] Training a Generally Curious Agent, https://arxiv.org/abs/2502.17543 [2] ReAct: Synergizing Reasoning and Acting in Language Models, https://arxiv.org/abs/2210.03629 [3] Grounding Multimodal Large Language Models in Actions, https://arxiv.org/abs/2406.07904 [4] ELLA: Exploration through Learned Language Abstraction, https://arxiv.org/abs/2103.05825 Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your enthusiastic review and constructive feedback! We’re delighted by your positive assessment and have addressed your questions below to strengthen the manuscript further. --- ### P1 Task Complexity See P2 in answer to reviewer `zuF6`, showing **new results** on generalization from simple to more complex tasks. ### P2 Questions: Base vs. instruction-tuned models 1. We used **Llama-3-8B-Instruct** throughout all experiments (Appendix A.1), which we will clarify in the main text to avoid confusion. 2. Since we already use the instruction-tuned version, this question does not apply. 3. Similarly, since our experiments already use the instruct model, we think a comparison with the base model is not necessary. ### P3 Invalid action rate Pure RL policies (e.g., GCRL, tabular Q-learning policies) are designed to only take actions that have been seen in the training dataset (unless they are in a previously unseen state where the action is taken at random), minimizing invalidity. In contrast, in TEDUO, the LLM’s prior over the action space (which may contain invalid actions) is merged with the learned policies via SFT. While, on the one hand, this enables the LLM to flexibly generalize to new tasks and environments, it may occasionally lead to proposing invalid actions. As demonstrated in our experiments, the invalid action rate increases as the inference setting diverges from the training distribution; a non-fine-tuned LLM has a very high rate of invalid actions. This reflects a trade-off between generalization and memorization of the observed behaviors. When the agent is in a previously seen state, a potential workaround could integrate action masking by pruning options deemed invalid by the Q-learning policies. **Action:** We will discuss these observations in the limitations section of the revised manuscript, particularly in relation to high-stakes environments where safety is a concern. 
### P4 Paper Title We appreciate the reviewer’s feedback regarding our title. While it was not LLM-generated (we promise!), we are open to suggestions for improving clarity in the camera-ready version. ### P5 Training directly on collected trajectories vs. TEDUO. We interpret the reviewer’s question as suggesting an alternative approach where the LLM is fine-tuned directly on observed trajectories. However, this approach has several limitations: Firstly, the data collection policy that generated the trajectories used for subsequent offline RL may be highly suboptimal with respect to the test time goals. In fact, it can even be random (see Appendix B.2. for an experiment with randomly collected observational data). Secondly, the collected trajectories are a priori not goal-labeled. While we could, in theory, fine-tune the LLM on sequences: $s_0, a_0, s_1, a_1, \ldots$, the LLM could not be made aware of which goal a given sequence is solving. This challenge is addressed by the hindsight labeling step of our pipeline. Finally, skipping the policy learning stage and fine-tuning the LLM on just the trajectories matched with the goals after step 1 is expected to perform strictly worse than fine-tuning the LLM on the Q-learning policies. This is a consequence of the suboptimal nature of the data collection policies. Importantly, this Q-learning step is also the least computationally expensive part of our pipeline, as it does not involve any LLM calls, so including it only marginally increases the overall compute. ### P6 Concurrent work Regarding the concurrent work cited by the reviewer [1], we find it an interesting idea, but we also highlight some key differences: - In the first part of this method, the collection policy (an LLM) is assumed to be good enough to generate a large number of trajectories with positive reward. (In our paper, the goal of step 2 (offline RL) is to obtain more positive trajectories based on the suboptimal offline data.) 
- In the second part, they shift to online interaction, generating trajectories from their trained models, whereas TEDUO remains strictly offline throughout. - This method does not handle unlabelled observational data, while TEDUO explicitly addresses this by leveraging hindsight labeling to infer goals. As a result, the approach in [1] is not applicable to the data regimes considered in our work. To teach the LLM, [1] uses DPO (contrasting positive and negative trajectories), while we train only on the positive trajectories via SFT. This alternative approach is interesting and could be used in TEDUO step 3. We are happy to cite this work in the related work section of the updated version of our manuscript. --- Thank you once again for your feedback. We’re thrilled by your excitement about TEDUO’s potential and hope these revisions solidify its contribution. Thank you for your support! --- Rebuttal Comment 1.1: Comment: Dear Authors, Thanks a lot for your thoughtful response! This mostly satisfies my concerns. I have just one followup question: > In the second part, they shift to online interaction, generating trajectories from their trained models, whereas TEDUO remains strictly offline throughout. Could you explain this part? I did not think [1] uses the trained model to generate more data to train it further, but I am curious if I do not understand this difference between your work and this paper correctly. Thanks! --- Reply to Comment 1.1.1: Comment: Dear reviewer, We are glad that our previous response addressed your last concerns. 
Regarding the use of generated trajectories during training in [1], our understanding is based on Section 3.4, *"Scalable online curriculum learning."* In this section, the authors describe evaluating their policy through online interaction in order to estimate task difficulty: *"we then uniformly sample one task from the chosen group to evaluate the model performance with C rollouts."* While [1] does aim to reduce the number of rollouts required, acknowledging the associated costs, the training process still relies on an online setting. We hope this clarifies the difference, and we are happy to discuss further if needed.
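To make the hindsight-labeling argument from P5 above concrete, here is a minimal illustrative sketch of relabeling an unlabeled trajectory with the goal it happened to achieve. This is our own simplification, not the paper's implementation: `describe_goal` stands in for the LLM-based labeler, and the flat `[s0, a0, s1, ...]` trajectory format is hypothetical.

```python
def hindsight_label(trajectory, describe_goal):
    """Relabel an unlabeled trajectory (s0, a0, s1, a1, ..., sT) with the goal
    it happened to achieve, yielding a goal-conditioned training example."""
    states = trajectory[::2]           # s0, s1, ..., sT (even positions)
    goal = describe_goal(states[-1])   # treat the reached final state as the goal
    return {"goal": goal, "trajectory": trajectory}
```

For instance, `hindsight_label(["s0", "a0", "s1"], lambda s: "reach " + s)` produces a training example labeled with the goal `"reach s1"`, which is the kind of (goal, trajectory) pairing that step 1 of the pipeline needs before any policy learning can happen.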
Summary: This paper introduces TEDUO, a training framework that aims to enhance language-conditioned policy learning in autonomous agents while reducing the reliance on extensive data. The framework is structured around three key stages, each leveraging the capabilities of large language models (LLMs). First, data enhancement is performed by using LLMs to abstract states and apply hindsight labeling to an unlabeled dataset of state-action transitions, resulting in the creation of labeled datasets. Next, policy learning is conducted by utilizing offline reinforcement learning (RL) algorithms to develop optimal policies tailored to a finite set of training goals. Finally, the framework emphasizes generalization, where a base LLM is fine-tuned to encode knowledge of environment dynamics and optimal actions, enabling it to adapt to unseen states and interpret novel language commands effectively. Claims And Evidence: The claims are generally well-supported. Methods And Evaluation Criteria: 1. The pipeline for data enhancement relies on the symbolic nature of the tasks. For example, BabyAI is an easy-to-define symbolic task where changing the shape or color of objects and doors would result in new tasks. Also, WebShop tasks are primarily focused on browsing objects, with actions mainly limited to search [A] and buy [A]. However, it would be more difficult to enhance data for more complex environments, e.g., WebArena or OSWorld. 2. Also, the method lacks scalability due to its reliance on memory-intensive tabular Q-learning. It would suffer if the task is very complicated and requires many steps to finish. 3. Rewriting trajectories as data enhancement is very limited due to the extent of change one could perform when rewriting trajectories. For example, no additional exploration or trial and error is added. Theoretical Claims: No theoretical claims are involved. Experimental Designs Or Analyses: 1.
More experiments on complicated tasks (with diverse goals and a larger action space) to prove the applicability of this method are necessary. Please see Methods for details. 2. Comparison to more simplified RL methods, e.g., DeepSeek-R1-like methods that directly use a task success verifier as the reward to train the policy model. Supplementary Material: I briefly checked the appendix. Relation To Broader Scientific Literature: The method proposed by the paper is novel, yet its conclusion is not solid due to the lack of comparison on more complicated tasks and up-to-date baseline models. Also, the improvement in generalization ability is limited to generalization to similar environments and lacks discussion of generalization to more challenging tasks (e.g., pick up red ball -> pick up green ball, vs. pick up red ball -> boss level in BabyAI). Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: 1. Could you elaborate on how you could create abstract states and templates for a quite complicated task, e.g., WebArena? Note that the website for browsing could be written as an abstract state, but the next state after clicking on the website could be very different. 2. Direct RL training has been shown to be more effective for generalization than trajectory SFT in recent work [1]. Why is it necessary to use abstract-template-generated trajectories for SFT rather than directly performing RL training? [1] SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you for your review and constructive feedback. We appreciate your engagement with our work and have carefully addressed your concerns below: --- ### P1 Task Complexity and Scalability See P2 in the answer to reviewer `zuF6`, including **new results on generalization from simple to complex tasks**. ### P2 Scalability of Q-Learning Tabular Q-learning was used in our main experiments as it suits the BabyAI environment. However, TEDUO’s pipeline is agnostic to the choice of the learning algorithm in step 2. For larger state spaces, it can incorporate more scalable offline RL methods like DQNs, which our ablation study (Appendix B.4) shows perform comparably on BabyAI. In WebShop, we instead used filtered behavioral cloning, as Q-learning would be ineffective due to the limited number of trajectories in comparison to the dimensionality of the state space. WebShop was intentionally chosen to demonstrate TEDUO’s flexibility, reinforcing that it is agnostic to the policy-learning algorithm. ### P3 Trajectory Rewriting and Offline RL While trajectory rewriting indeed cannot introduce new exploration, development of flexible offline RL methods is critical for real-world settings where exploration is unsafe or impractical (e.g., robotics, healthcare). TEDUO’s focus on *offline* training aligns with such requirements. That said, we find the extensions of similar hybrid approaches employing LLMs as RL agents in online settings a highly promising area of research, which is beyond the scope of this work. ### P4 Additional RL baselines and comparison to DeepSeek R1 We agree that comparisons to state-of-the-art baselines are essential. However, as TEDUO focuses on *offline* language-conditioned RL, to the best of our knowledge, there are no directly comparable prior works beyond the ones already included in our benchmarking. We are open to suggestions regarding additional baselines suited for the offline language-conditioned RL setting. 
Regarding generalization to more complex tasks, we have now extended our evaluation with a new experiment demonstrating our method’s success (see P1). Please note, testing RL agents on more complex tasks than the ones seen at training time is non-conventional and we find the presented result very exciting. **Comparison to DeepSeek R1.** The reviewer rightly notes the success of recent works employing GRPO for *online* policy improvement in the context of LLM training. However, such methods require access to real-time experimentation to enable policy rollouts. The focus of this paper is on learning from passively collected observational datasets. We find employing ideas similar to the ones observed in Deepseek R1 a very promising direction for follow-up works. - **New result:** We included a comparison of TEDUO to the baseline of prompting Deepseek R1, see P1 in the answer to reviewer `zuF6`. ### P5 Question: State abstraction We agree with the reviewer’s suggestion to clarify the role and implementation of state abstractions. State abstraction serves two purposes: (1) transforming the states from their initial modality into text and (2) filtering all irrelevant information to achieve the goal. For a web agent like Webshop or WebArena, the first purpose is achieved by transforming the HTML code of the webpage into a curated text report, where possible actions (button/search bar, etc.) are identified. The second purpose is achieved by removing irrelevant information such as advertisements or clearly irrelevant buttons etc. In our case for Webshop, the first part is natively supported by the environment, and the second part is not needed due to the noiseless nature of the provided state. ### P6 Question: Direct (online) RL training TEDUO prioritizes *offline* RL because many real-world applications (e.g., healthcare, education) prohibit online exploration. While online RL excels in simulated settings, our framework addresses practical constraints. 
The cited work ([1]) focuses on online RL, which is orthogonal to our setting. [1] SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training --- Thank you for suggesting the areas for improvement, which has helped us refine and better articulate TEDUO’s contributions. Your points inspired us to: - **Push generalization further** with new results showing the promise in TEDUO’s ability to train LLM agents to generalize to new, more complex tasks. - **Clarify scalability** by emphasizing TEDUO’s compatibility with more scalable offline RL algorithms, like DQNs. - **Highlight the focus** on real-world data-constrained applications and discuss exciting areas for **future work,** extending LLM+RL hybrids to online settings. We hope you’ll find our revisions compelling and reconsider your score. Thank you for helping us in making a meaningful contribution to the field. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed explanations. I’ve reviewed your added experiments on a more complicated BabyAI task. However, my main concern remains whether offline data collection on simpler tasks has the potential to generalize effectively to more complicated scenarios. While data diversity may increase, the complexity of the collected trajectories might not vary significantly, which could lead to limited generalization, which has been addressed in previous work [1]. Compositional generalization—generalizing from simpler tasks during training to more complex tasks—continues to be a key challenge, particularly in symbolic tasks. Regarding the focus on offline RL, your explanation partially addresses my concerns. It would be helpful to explicitly highlight the focus and discuss the scope of TEDUO in this context, especially in comparison to existing works in offline RL. 
However, given the limitations of this work’s scope to symbolic tasks, and considering that there are already more general online and offline methods (e.g., DPO), I remain inclined towards rejection. [1] Yuan, Lifan, et al. "Advancing llm reasoning generalists with preference trees." arXiv preprint arXiv:2404.02078 (2024). --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful comments and for engaging deeply with our work. We appreciate that our previous response helped address some of your concerns, and we would like to further clarify and contextualize the remaining points you raised. **On the generalization capabilities from offline data:** We agree with the reviewer that the ability to generalize to more complex tasks is fundamentally constrained by the information content of the collected trajectories. However, we view this as a challenge inherent to offline RL rather than a weakness of our particular method. TEDUO aims to address this challenge by leveraging external knowledge from LLMs, both to label trajectories and to abstract the state space. **On the motivation for offline RL and comparison to online methods:** The use of an offline RL framework is not merely a design choice, but a necessity dictated by many real-world applications, such as finance, education, autonomous driving, and healthcare, where online data collection can be costly, unsafe, or legally constrained. Therefore, while we appreciate the comparison to online RL methods such as [1], these are not interchangeable with offline methods in such settings. Offline and online RL serve fundamentally different use cases, and direct comparison can be misleading. **On the role of DPO and relevance to TEDUO:** We understand the reviewer's suggestion that DPO may be seen as a more general alternative. 
While DPO is indeed used in RLHF settings to fine-tune large language models from preference data when an absolute reward function is unavailable, we would like to clarify its relevance in our context. Specifically, DPO could only be applied to Step 3 of our pipeline, where the LLM is taught to reproduce optimal policies using SFT. However, in our case, an explicit reward function is available, making the application of DPO less natural or necessary. Moreover, the work cited by the reviewer ([1]) explicitly shows that in online RL settings, DPO performs strictly worse than SFT for distilling policies. This suggests that, even within its applicable scope, DPO may not offer empirical advantages in our setting. More importantly, the core contribution of TEDUO is orthogonal to this comparison: our work investigates the generalization ability of the full training pipeline, from unlabelled data to policy generation applicable across diverse tasks. This broader focus--particularly the transformation of unlabelled data into trainable signals--is not addressed by DPO or similar methods. We hope this clarification helps position our contribution more precisely within the broader landscape of RL. **On compositional generalization and task complexity:** The reviewer raised an important point about compositional generalization. While we provide preliminary evidence that TEDUO can generalize from simpler to more complex tasks in section 5.3 and with the new experiments, we emphasize that the generalization to out-of-distribution goals is beyond the typical expectations of goal-conditioned RL, where tasks at test time are usually drawn from the same distribution as training tasks [2], including in terms of difficulty. **On clarifying our offline RL focus:** Thank you for suggesting we clarify our focus. We will revise the introduction to more clearly motivate the choice of the offline RL setting. 
Additionally, detailed discussions of related work in goal-conditioned offline RL can be found in Sections A.1 and A.2 of the appendix. We would be happy to incorporate any other relevant references you believe are essential. **On the symbolic environment constraint:** We acknowledge, as noted in our limitations section, that the proposed instantiation of TEDUO is restricted to environments that can be represented symbolically. However, we believe this limitation is mitigated by the broad expressiveness of natural language. - Many environments studied in autonomous agent research inherently offer textual state representations, such as web or computer environments (as proposed by the reviewer) [3,4], video games [5], and multi-turn dialogue systems [6]. - For most reinforcement learning environments, which often have tabular representations, these can be converted into key-value pairs. Values can be discretized as needed, which is a common practice in RL. - In the case of pixel-based RL, TEDUO could leverage VLMs instead of LLMs for both the data enhancement and fine-tuning stages. Alternatively, converting pixel-based environments into textual representations is an active area of research [7]. Given the above points, we believe that focusing our attention on environments representable in a text format is not a major limitation of the proposed approach. [1] Advancing llm reasoning generalists with preference trees, 2024 [2] Goal-Conditioned Reinforcement Learning: Problems and Solutions, 2022 [3] OSWorld, 2024 [4] WebArena, 2024 [5] NetHack, 2020 [6] LMRL Gym, 2023 [7] ALFWorld, 2021
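As background for the tabular Q-learning step (step 2 of the pipeline) discussed throughout this thread, here is a minimal sketch of offline, goal-conditioned tabular Q-learning over a fixed buffer of hindsight-labeled transitions. This is our own illustrative simplification, not the paper's exact implementation: the transition format and all names are hypothetical, and states/goals are assumed to be hashable abstractions (e.g., strings produced by the state-abstraction step).

```python
from collections import defaultdict

def offline_q_learning(transitions, alpha=0.5, gamma=0.95, epochs=50):
    """Tabular Q-learning over a fixed buffer of goal-labeled transitions.
    Each transition is (goal, state, action, reward, next_state, done)."""
    Q = defaultdict(float)  # Q[(goal, state, action)], defaults to 0.0
    actions = {a for (_, _, a, _, _, _) in transitions}
    for _ in range(epochs):
        for g, s, a, r, s2, done in transitions:
            # Bellman target: terminal transitions bootstrap nothing.
            target = r if done else r + gamma * max(Q[(g, s2, b)] for b in actions)
            Q[(g, s, a)] += alpha * (target - Q[(g, s, a)])
    return Q

def greedy_policy(Q, goal, state, actions):
    """Act greedily with respect to the learned goal-conditioned Q-table."""
    return max(actions, key=lambda a: Q[(goal, state, a)])
```

Because the updates only sweep a fixed buffer, no environment interaction is needed, which is the point of the offline setting; swapping the table for a DQN, as the rebuttal notes, changes only how `Q` is represented.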
Uniform Mean Estimation for Heavy-Tailed Distributions via Median-of-Means
Accept (poster)
Summary: The paper examines the Median of Means (MoM) estimator for estimating means in heavy-tailed distributions. The authors derive a new sample complexity bound using an innovative symmetrization technique. They also present applications of this technique to k-means clustering with unbounded inputs and to linear regression with general loss functions. Overall, the study introduces a novel symmetrization method to achieve these results. Claims And Evidence: Yes, the authors provided theoretical results to support their claims. However, I have some comments on problematic claims: The expressions in the main results—Theorem 3.4, Theorem 4.1, and Theorem 4.3—are not reader-friendly and can be difficult to follow. The meanings of \(\epsilon\) and \(\delta\) in the final formulas are quite ambiguous, and their relationship is not directly explained. This lack of clarity makes it challenging to understand the claims and complicates evaluation and comparison with previous works. Methods And Evaluation Criteria: 1. No benchmark datasets are provided to demonstrate the application of the methods. 2. The paper does not clearly present the proposed method; for instance, there is no algorithm table showing each step of the process. This ambiguity makes application and evaluation very difficult. Theoretical Claims: The notations in the theoretical proof are very ambiguous and contradictory, making it difficult to verify its correctness. For example, in Lemma 3.7 on page 5, the values are given as a = 4801/10000 and b = 9701/10000. However, in Lemma 3.8 on page 6, the values change to a = 4769/10000 and b = 331/10000. This inconsistency renders the proof problematic. Experimental Designs Or Analyses: This paper doesn't have experimental designs or analyses. Supplementary Material: Yes. I read the proof of some key Lemmas.
Relation To Broader Scientific Literature: The ideas in this paper can be applied to k-means clustering with unbounded inputs and to linear regression with general losses, enhancing the existing approaches. Essential References Not Discussed: The literature review is extensive. Other Strengths And Weaknesses: **Other Strengths:** 1. The motivation of the paper is clear. **Other Weaknesses:** 1. The assumptions of the theoretical analysis are not clearly stated. 2. The expression MoM in Theorem 3.4 is not well-defined; a clearer expression would be more reader-friendly. Other Comments Or Suggestions: 1. Experimental analysis would be preferred by readers. ============================= Post-rebuttal update: Thank you for your detailed response to my reviews. I appreciate that all of my concerns have been addressed. I will increase my overall rating based on the discussion in the rebuttal section, provided that my concerns and suggestions are included in the next version. Questions For Authors: 1. Is it possible to include some experimental analysis on real data? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to read the manuscript and provide constructive feedback. In the following, we address the reviewer’s comments in the order they were given, except for those regarding experiments, which we address last.

**Claims and Evidence:** We first address the reviewer’s comment on the main results being quite technical and difficult to follow. We will add further explanations in the paragraph following Theorem 3.4. Specifically, we will clarify that the precision parameter $\varepsilon$ affects both $m$ (the number of samples used in each mean estimate) and the precision of the net, with the intuition that we want both the mean estimates to be $O(\varepsilon)$-close to the mean and the "rounding" in the net to have precision $O(\varepsilon)$. Additionally, we will clarify that $\delta$ influences only the number of mean estimates $\kappa$, as this parameter is used to increase the probability that more than half of the $\kappa$ mean estimates are successful; this ensures that the median of the mean estimates, which is the output of MoM, is correct. We appreciate the reviewer’s suggestion to clarify this point.

**Theoretical claims:** We agree that readability would be improved by keeping the constants in Lemma 3.8 as $c=\frac{4769}{10000}$ instead of $a=\frac{4769}{10000}$ and $d=\frac{331}{1000}$ instead of $b=\frac{331}{1000}$, to mirror the constants in Lemma 3.7. We will make this correction in the next version of the paper.

**Methods And Evaluation Criteria:** We thank the reviewer for pointing out that the estimator could be better presented. We will add a pseudo-algorithm environment illustrating the MoM procedure.

**Weaknesses:** We thank the reviewer for pointing out that some of the theoretical assumptions could be more clearly stated.
For example, in Theorem 3.4, we did not explicitly state that $\mathbf{X}=(\mathbf{X}\_1,\ldots,\mathbf{X}\_{\kappa})$ is drawn from $(\mathcal{D}^{m})^{\kappa}$, so that $\mathbf{X}\_i \sim \mathcal{D}^{m}$, making it unclear that $\text{MoM}(f,\mathbf{X})$ denotes $\text{median}(\mu\_{f,x\_1},\ldots,\mu\_{f,x\_{\kappa}})$. We will correct this in the next version of the paper, along with other suggestions from the reviewer to improve clarity.

**Experiments:** As also mentioned by reviewer (giju), including experiments would be interesting, but we were pleased that neither reviewer required us to do so. Conducting such experiments would be quite involved. Testing the uniform convergence property would require running the estimation method (in this case, MoM) on all possible functions in the function class and then taking the maximum error. If the function class is infinite, this is infeasible. Thus, one would need to develop a reasonable discretization that is both computationally tractable and fine-grained enough to simulate uniform convergence over the class in the heavy-tailed case.

We thank the reviewer again for their feedback and hope that we have addressed their concerns. If not, please let us know, and we will do our best to address them.

---

Rebuttal Comment 1.1: Comment: Thank you for your detailed response to my reviews. I appreciate that all of my concerns have been addressed. I will increase my overall ratings based on the discussions in the rebuttal section, provided that my concerns and suggestions are included in the next version.
Summary: The authors study the uniform convergence problem, i.e., estimating the mean of a set of functions simultaneously over inputs randomly sampled from an underlying distribution. Specifically, the authors focus on the classical median-of-means estimator that has celebrated performance on traditional distributional mean estimation, in particular in the heavy-tail regime. They show that median-of-means, under mild assumptions that the function class of interest can be "discretized" in some suitable sense, admits near-optimal sample complexity to achieve additive estimation error $\epsilon$ simultaneously over all functions in the class with probability $1-\delta$. To prove their main theorem, the authors employ novel analytical techniques, including a novel symmetrization-discretization-permutation pipeline on the mean estimates from each bucket. They show that (1) for uniform mean estimation to fail, there must exist a function for which median-of-means has vastly different performances on a pair of symmetric inputs, (2) the function class can be discretized via a "relaxed $\epsilon$-net" argument to allow union bounds, even if the class is unbounded, and (3) for any fixed function, it is highly unlikely that mean estimate performs vastly differently on the symmetrization introduced in (1). The authors supplement their general theoretical findings with applications to clustering and linear regression, and raises questions of finding lower bounds to the problem setting. Claims And Evidence: The claims are believable and reasonable with supporting evidences. Median-of-means enjoys many folklore properties even beyond standard setting and heavy-tailed mean estimation, so I do not find the authors' conclusion surprising; It is however very nice to see a formal analysis of median-of-means under this setting. 
While there are some details that could be illustrated better, the authors also provided a relatively clear and intuitive proof of their results, and devised novel mechanisms for their analysis, which I believe should be appreciated. Methods And Evaluation Criteria: The model chosen by the authors, including the problem setting and the optimization over sample complexity, is standard for mean estimation and its derived problems. While the authors make assumptions on the discretizability of the input function class, they provide evidence that the assumption is mild and reasonable. Theoretical Claims: While I haven't examined the technical content in the finest detail, especially their choice of universal constants, the theoretical claims are reasonable to my best knowledge, and I do not identify any major issues with their proofs and analyses. I am somewhat concerned with the universal constants in their sample complexity (Theorem 3.4), which seem to have quite large magnitude even in the standard finite-variance setting. A short discussion about the reasonableness of these constants (perhaps in comparison to the performance of the empirical mean in settings for which it is shown to be optimal) would strengthen the authors' claims and arguments. Experimental Designs Or Analyses: This submission is focused on the theoretical analysis of median-of-means for uniform convergence mean estimation, and does not provide any experimental analysis. I do not believe that any experimental setup is necessary, but it would be interesting to see experiments that compare the performance of the empirical mean and other celebrated mean estimators to median-of-means on uniform convergence - if an appropriate experimental setup is available.
For many mean-estimation-related problems, especially in more general settings, the empirical mean often outperforms other sophisticated estimators that admit rigorous theoretical guarantees; I find it interesting to ask whether this is the case in uniform convergence as well, or whether the heavy-tailed regime calls for more sophistication than simply taking the empirical mean. Supplementary Material: The appendices include supplementary proofs of the technical lemmas used to prove the main theorems and applications. I have looked at the high-level proof structure of appendices A and B used in the proof of the main theorem, and do not have any disagreements with the authors' conclusions. Relation To Broader Scientific Literature: Median-of-means is a well-celebrated and extensively studied mean estimator that enjoys many robustness properties in many regimes extending beyond classical finite-variance mean estimation, some of them folklore. The authors formally analyze its performance for the uniform convergence problem under heavy-tailed regimes, which to my and the authors' knowledge is the first such formal analysis. Previous works have studied other classical estimators under more limited or incomparable models, such as the empirical mean under finite-dimensional constraints, or the trimmed mean with adversarial contamination and a fixed sample. The authors' proposed novel techniques, including the alternative symmetrization of median-of-means and the discretization of unbounded function classes, are potentially of independent interest for future work on mean estimation as well. Essential References Not Discussed: To my knowledge the authors cite and discuss essential related literature comprehensively.
On a more peripheral aspect, relating to my aforementioned concern about the universal constants in Theorem 3.4: A line of work in finite-variance distributional mean estimation starting from Catoni (2012) culminates in the Lee and Valiant estimator (FOCS 2022), which achieves the optimal estimation error even up to constants. While it is unknown what the optimal constants are in heavy-tailed regimes, the Lee and Valiant estimator is believed to outperform and enjoy better constants than median-of-means as well. I wonder if there are evidences, theoretical or empirical, that rationalizes the magnitude of the universal constants the authors choose. Other Strengths And Weaknesses: The paper is overall relatively well-written and clear, but due to the technicality of their proofs and analyses, I believe some of the claims and definitions can be better motivated and intuited. As an example, in the analysis in Section 3.4 on page 5, the right column contains high-level explanations of the flow of the authors' argument in proving Theorem 3.4, with Lemma 3.6, 3.7, and 3.8, which in conjunction with Figure 1 on page 3, is very nice and appreciated, and paints a good high-level picture of the overall structure of their proof; the definitions of $\hat{\mathbf{S}}_\mathbf{b}^{(b)}(f, \epsilon)$ on the left column of page 5, however, are built upon many layers of prior definitions and are technically convoluted, without an immediately clear interpretation of what they are supposed to represent. A brief explanation (such as something along the line of "the fraction of the mean estimate with large/small deviation on different samples") may be necessarily helpful. There are also some minor typos which I outline below. 
Other Comments Or Suggestions: - While to my knowledge not enforced by the ICML submission style requirements, I believe it reasonable to format in-line citations such that the sentence is complete after removing the citations: for example, in Section 2, page 2, right column, instead of "...special case of the formulation given in (Oliveira & Resende, 2023) except...", use "...special case of the formulation given in Oliveira & Resende (2023) except...". - In Section 3.2, page 3, there are multiple references to "Section 2" that appear to be referencing Figure 1 instead. - Section 3.2, page 3, left column, line 160 1/2: Use `` instead of " for the left double quote. - Section 3.4, page 5, left column, line 259 1/2: It is not immediately clear what it means for a random vector to be symmetric; several other symmetry properties are not clearly defined in the paper either. - Section 3.4, page 5, right column, line 247 1/2: Extra comma in "...this discretization perserves, the imbalance..." Questions For Authors: N/A. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to read the manuscript carefully and provide feedback.

**Theoretical claims:** We would like to address the reviewer’s comment about the magnitude of the constants in Theorem 3.4. We agree that the constants differ from those for the empirical mean estimate of a single function, and we provide the following explanation. The main reason for the larger constants is that we are trying to estimate multiple functions (possibly infinitely many), unlike in the case of a single function. In achieving this goal, the symmetrization, discretization, and permutation steps each introduce additional constant factors into the final bound, which leads to the larger constants. Furthermore, we did not optimize the constants in the proof of Theorem 3.4. We believe it is a good idea to add a comment about this in the next version of the paper and thank the reviewer for suggesting it.

**Experiments:** We are pleased that the reviewer does not necessarily require us to include experiments in the paper, although we agree that it would be exciting to see the results of such experiments. However, conducting such experiments is quite involved. Testing for uniform convergence would require running MoM and the empirical mean estimate on all possible functions in the function class and then taking the maximum of the error. If the function class is infinite, this is not feasible. Thus, one would need to develop a reasonable discretization that is both computationally tractable and sufficiently fine-grained to simulate uniform convergence over the class in the heavy-tailed case.

**Relation to broader scientific literature:** We also briefly emphasize that we are not the first to provide uniform convergence bounds under heavy-tailed noise, as described in the **Related Work** section of the manuscript. We differ from previous work in our focus on the *sample complexity* of the problem instead of the estimation error.
This is achieved thanks to a new analysis technique and the introduction of a novel complexity measure that, differently from previous work (which focuses on the Rademacher complexity), is the log of the size of a *relaxed* version of a discretization. Our main result (Theorem 3.4) enabled us to give improved sample complexity bounds in the important cases of k-means clustering and linear regression with general losses, which it is unclear how to derive from previous work.

**Relation with Lee and Valiant 2022:** We thank the reviewer for pointing out the breakthrough work of Lee and Valiant on optimal mean estimation. As noted above, while we did not attempt to optimize the constants in our bounds, moving from the task of estimating a single mean to that of estimating the means of each function in a possibly infinite class is likely to introduce additional (and potentially large) constant factors for any estimator. On top of that, the estimates of the *size* of the particular function class of interest often suffer from possibly large constant factors that may hide the benefits even of asymptotically optimal estimators. As a result, proving bounds with optimal constants (which, to the best of our knowledge, are still unknown) in uniform convergence is likely to require alternatives to the symmetrization, discretization, and permutation steps, in addition to the adoption of more refined estimators (e.g., Lee and Valiant 2022). This challenging problem is an interesting research direction for future work, and we will mention it in the next version of the manuscript.

**Strengths and weaknesses:** We thank the reviewer for highlighting that we could have presented some of the more technical details in the proofs more intuitively and accessibly.
**Other Comments or Suggestions:** We also thank the reviewer for providing several suggestions for improvement, which we will incorporate into the next version of the paper. If the reviewer has any further comments please let us know, and we will do our best to address them. --- Rebuttal Comment 1.1: Comment: Thank you for your reply. Regarding my comment on constants in the "Theoretical claims" section, I meant to ask whether the constants for the sample mean on the uniform convergence problem are known in restricted settings, e.g., function classes with bounded interval and finite fat-shattering dimension, as outlined in the related works section. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for engaging in the rebuttal. If we understand the comment correctly, the reviewer is asking what order of magnitude the constants take in uniform convergence bounds for the sample mean in well-known settings. We provide the following answer: In the canonical case of $[0,1]$-bounded functions, the gap between the constants appearing in the upper and the lower bounds is large. Furthermore, the constants appearing in the upper bounds are large. In particular, for bounds based on the fat-shattering dimension of the function class, the multiplicative constant $C_1$ in the following *classical* upper bound from [P.L. Bartlett and P.M. Long 1996] $\left(\frac{C_1}{\varepsilon^2} \left(d\log^2\left(\frac{1}{\varepsilon}\right) + \log \left(\frac{1}{\delta}\right) \right) \right)$ is at least $1536$, which can be seen from the proof of Theorem 9 (5) by taking $\alpha$ close to $\varepsilon/4$. This bound has recently been improved in [E. Esposito, R. Colomboni, A. Paudice 2025] to $\left(\frac{C_2}{\varepsilon^2} \left(d + \log \left(\frac{1}{\delta}\right) \right) \right)$, where the constant $C_2$ is at least $5367c'$, and $c'$ is supposedly a large unknown constant (see point (j), page 13). 
Finally, we also mention the specific case of binary-valued functions, where the constant $c$ appearing in the upper bound of Theorem 1 of [P.M. Long 1999] is at least $554$ (the estimate is obtained from Lemma 9 in the same paper). In terms of lower bounds, to our knowledge, the lower bound closest to the upper bound is that for uniform convergence over binary-valued functions appearing in Section 28.2 (page 393) of [S. Shalev-Shwartz and S. Ben-David 2014], where the constant is at least $8$ (see the $m(\varepsilon,\delta) \geq (8d)/\varepsilon^2$ bound, line 5 at the beginning of Section 28.2). As one can see, the problem of establishing the optimal constants for uniform convergence is still open, and the constants appearing in the upper bounds are in general large. We remark that the more complex cases involving the fat-shattering dimension, which we believe our setting is closer to, suffer from especially large constants. We hope that this addresses the reviewer's comment; otherwise, we will be happy to further elaborate. **References:** [P.L. Bartlett and P.M. Long 1996]: More theorems about scale-sensitive dimensions and learning. Conference on Learning Theory (COLT). 1995. [P.M. Long 1999]: The Complexity of Learning According to Two Models of a Drifting Environment. Machine Learning. 1999. [S. Shalev-Shwartz and S. Ben-David 2014]: Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press. 2014. [E. Esposito, R. Colomboni, A. Paudice]: An Improved Uniform Convergence Bound with Fat-Shattering Dimension. Information Processing Letters. 2025.
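For readers who want a feel for how large these constants make the implied sample sizes, here is a quick back-of-the-envelope computation (the parameter values $\varepsilon = 0.1$, $\delta = 0.05$, $d = 10$ are our own illustrative choices, not taken from the cited papers):

```python
import math

def bartlett_long_bound(eps, delta, d, C1=1536.0):
    """Sample size implied by the classical upper bound
    (C1 / eps^2) * (d * log^2(1/eps) + log(1/delta))."""
    return (C1 / eps ** 2) * (d * math.log(1 / eps) ** 2 + math.log(1 / delta))

# Illustrative parameters: accuracy 0.1, confidence 0.95, fat-shattering dimension 10.
m = bartlett_long_bound(eps=0.1, delta=0.05, d=10)
print(f"{m:,.0f} samples")  # roughly 8.6 million
```

Even for these modest parameters, the $C_1 \ge 1536$ constant alone pushes the guaranteed sample size into the millions, which is the point being made above.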
Summary: The paper proposes to use the median-of-means estimator to (uniformly) estimate the mean over a whole real-valued function family, with respect to some unknown distribution with bounded $(1+p)$-th moment that we only get sample access from. The authors give an analysis of the maximum estimation error, under the assumption that the function family satisfies some approximability property with a small cover. They then apply the result to $k$-means clustering and to regression problems. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I didn't check the proof details, but the high-level proof strategy does look like it should work. I have no correctness concerns. Experimental Designs Or Analyses: N/A Supplementary Material: I only looked at some bits of the appendix when trying to understand the proof strategy better. Relation To Broader Scientific Literature: The uniform mean estimation problem is well-studied in the literature, both in the finite variance case (or bounded variance case, for Catoni-style estimators) and in the finite $(1+p)$-th moment case. Methods have been proposed, based on median of means, trimmed mean, and Catoni's estimator. This paper uses a symmetrization argument different from prior works. A small personal gripe (though I don't actually reduce my score based on this) is the use of median-of-means. We know that the estimator is awful both theoretically and empirically, in the vanilla mean estimation problem (without any function classes). Both its finite sample performance and its (fix $\delta$, take $n \to \infty$) asymptotic performance are off by constants from optimal (definitely in the $1+p = 2$ case), which show up empirically. On the other hand, trimmed mean is at least asymptotically efficient, and the recent Lee and Valiant estimator is optimal in the constants both finite-sample and asymptotically. 
All of these other estimators are also minimax-up-to-constants optimal in "heavy-tailed" settings too, so it doesn't seem very good to still talk about median-of-means, as clean and simple an idea as it is. Essential References Not Discussed: No. Other Strengths And Weaknesses: While the paper clearly *explains* the proof, I don't think it does as good a job with *motivating* the analysis and assumptions. Let me walk the authors through what I was thinking when reading the paper, including an initial misunderstanding, the subsequent confusion and a partial resolution. Hopefully this can help improve the technical narrative in the paper. 1. Read the main sample complexity result. The form of the sample complexity looks easy to interpret -- it looks like a covering number term inside the log, so at this point I'm expecting a covering on the function class, and then a union bound net argument + approximation guarantees in between net elements. 2. I checked the assumption. The notion of D-discretization looks like what one would expect -- each $f$ gets mapped to some $\pi f$ in the net, so that $\pi f$ approximates $f$ well (Definition 3.1 has the sum of absolute differences over the samples being small, which is a strong condition). Then I was wondering, why do we need the *three* sample sets? It clearly has something to do with the symmetrization argument that was foreshadowed, but why do we need that? 3. In fact, at this point I was wondering, why not do the obvious thing of taking a union bound over the net elements (with the net over the function class), and then use D-discretization to fill out the rest of the function class? 4. It took a while before I saw that the net in fact depends on the set of samples, which is why the simpler net argument fails. The D-discretization approximation condition is weaker in quantification than what one expects for a net argument. 
The key quantification difference is somewhat buried, and as a reader, it would really help if the difference were highlighted and emphasized. 5. But this still leaves the question, why was the weak notion of D-discretization used (which then seems to necessitate the complicated symmetrization), instead of the stronger "we have a single net that works for the entire function class (with high probability)" quantification? Is it because the latter stronger notion is impossible to prove for the applications at hand? If so, why, and shouldn't the applications then be the main point of the paper instead of just "median-of-means can be used for uniform mean estimation"? ============= **Post-rebuttal discussion**: Thank you, this is exactly the sort of discussion I was looking for (as someone who hasn't paid too much attention to covering-based results), that prior works haven't found sample-independent covers. I have now raised my score, under the assumption that the authors will include this technical motivation and discussion of quantifiers in the paper. Other Comments Or Suggestions: N/A Questions For Authors: Please clarify the technical motivation as discussed above. Code Of Conduct: Affirmed. Overall Recommendation: 4
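For reference, the vanilla median-of-means estimator under discussion can be sketched in a few lines (this is the plain single-function version, with a naive max over a finite class to mimic the uniform setting; it is only an illustration, not the paper's construction):

```python
import statistics

def median_of_means(xs, k):
    """Split xs into k groups, average each, return the median of the group averages."""
    m = len(xs) // k  # group size; any trailing remainder is dropped for simplicity
    group_means = [sum(xs[i * m:(i + 1) * m]) / m for i in range(k)]
    return statistics.median(group_means)

def uniform_mom_error(fs, xs, true_means, k):
    """Worst-case MoM estimation error over a finite function class."""
    return max(abs(median_of_means([f(x) for x in xs], k) - mu)
               for f, mu in zip(fs, true_means))

# A single heavy outlier barely moves MoM, unlike the empirical mean.
data = [0.0] * 11 + [1000.0]
print(median_of_means(data, k=4))  # 0.0
print(sum(data) / len(data))       # ~83.3
```

The outlier corrupts only one of the four groups, so the median of the group means ignores it; this robustness-to-heavy-tails property is what the uniform analysis extends to a whole function class.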
Rebuttal 1: Rebuttal: We first thank the reviewer for carefully reading the manuscript and providing constructive feedback. **Relation To Broader Scientific Literature:** We would like to comment on the reviewer's concern about the relevance of MoM in light of novel, more refined estimators. We start by noting that, similar to the MoM estimator, even the best-known bounds for the trimmed mean suffer from sub-optimal constants in the single mean task (see Oliveira, Ornstein, and Rico 2025, Theorem 1.1.1, for the case $p=2$). In addition, MoM is widely used as a sub-routine of other methods, including in the optimal Lee and Valiant estimator. Furthermore, we note that in the uniform convergence setting, unlike in the single mean estimation case, it isn't even clear what the optimal constants are. Therefore, we believe that it still makes sense to analyze MoM in the more general uniform convergence setting. We thank the reviewer for eliciting such an interesting discussion. **Strengths and weaknesses:** We appreciate the reviewer's efforts in walking us through their experience of reading the manuscript and highlighting that the current introduction of the $\mathcal{D}$-discretization is suboptimal. Regarding point 5), we would like to provide some high-level intuition about why we chose this weak notion of a net. As the reviewer pointed out, the discretization allows consideration of realizations of the samples $X_{0},X_{1},X_{2}\in (\mathcal{X}^{m})^{\kappa}$ to estimate any function $f\in \mathcal{F}$ except on a small fraction of the $\kappa$ subsamples $X_{0}^{i},X_{1}^{i},X_{2}^{i},$ where the approximation of $f$ can be arbitrarily bad. Since we are considering heavy-tailed distributions, we cannot expect all subsamples to allow accurate estimation of the function; this is why we did not see how to show the result without a definition of the discretization that allows it to "fail" on a small number of subsamples. 
More specifically, in the case of k-means, we can assert that most of the mean estimates on the subsamples $X_{0}^{i},X_{1}^{i},X_{2}^{i},$ are small, but we cannot make any claims about the remaining mean estimates. With the knowledge that the mean estimates on most subsamples are small, we can discretize the functions on these subsamples and disregard the discretization on the remaining ones. A similar argument is used in the proof for regression. As alluded to earlier, we did not see a way to prove the theorem without this weaker notion of a net. We again thank the reviewer for emphasizing the need for a more detailed explanation of the discretization, which we will improve in the next version. If the reviewer has any additional comments, we would be happy to address them. --- Rebuttal Comment 1.1: Comment: I thank the authors for engaging in this discussion. I'm still a little bit confused though: I already understand that the index $i$ doesn't cover all of $[\kappa]$ (unsurprisingly for heavy tailed distributions, we don't expect all the groups to concentrate well). My main question was rather, why is the cover $F_{(\epsilon,m)}$ chosen based on the samples $\mathbb{X}_0, \mathbb{X}_1, \mathbb{X}_2$, instead of fixed independently of the samples? If $F_{(\epsilon,m)}$ were hypothetically fixed independently of the samples, I think one could just argue that MoM estimates the functions in $F$ well with high probability by a union bound, and that the hypothetical Definition 3.1+3.2 imply that we can handle the rest of the function class from the net using the approximation? Am I missing something major? If not, then I believe this captures my main confusion: why does the cover $F$ need to depend on the samples $\mathbb{X}_0, \mathbb{X}_1, \mathbb{X}_2$, making the above simple argument fail? If the authors can explain this to me and incorporate this technical motivation into the paper, then I'd very gladly raise my score. 
--- Reply to Comment 1.1.1: Comment: We thank the reviewer for the engagement in the rebuttal. If we understood correctly, the reviewer is asking why the cover $F_{(\varepsilon, m)}$ is dependent on the realization of the sample, rather than being fixed beforehand and thus holding for all sample realizations. The answer is the following: it is possible to use a sample-independent cover, as long as you can find one, since such a cover fulfills the requirements of Definitions 3.1 and 3.2. Since sample-independent covers are captured by Definitions 3.1 and 3.2, the latter are more general notions, thus leading to a more general result. The reason we went through these notions is that canonical definitions of covering go through sample-dependent covers (see for example Definition 27.1 in [S. Shalev-Shwartz, S. Ben-David 2014], where vectors should be thought of as the values taken by the functions on the sample). Furthermore, to our knowledge, covering results guaranteeing the existence of a cover are sample/distribution dependent: see for instance Lemma 7 in [A. Kupavskii and N. Zhivotovskiy 2020], where they state the bound in terms of packing (which yields a cover as well). We notice here that the cover is in terms of the uniform distribution over the given sample. For a further example, see Corollary 5.4 in [M. Rudelson and R. Vershynin 2006], where one has to take the distribution of the Corollary as the uniform distribution over the sample to get a cover. Here one could also notice that, right after the symmetrization step, the empirical process of interest is (effectively) indexed by the projection of $\mathcal{F}$ onto the sample, which is arguably a simpler object compared to the one indexed by $\mathcal{F}$. This is also reflected in the definition of common complexity measures (including VC-/Pseudo-/Fat-Shattering-dimension) that are indeed related to the size of sample-dependent covers of $\mathcal{F}$. 
A related observation is that, since being sample-independent places stronger requirements on the cover, it is harder to find such covers. Indeed, beyond special cases (e.g., linear functions with bounded input and bounded weights), we are not aware of finite sample-independent covers for general function classes, nor is it clear how to find them. Finally, we remark that these observations do not rule out the possibility of getting sample-independent covers, but highlight the fact that sample-dependent covers are the canonical approach to the discretization of $\mathcal{F}$; as mentioned, sample-independent covers are a special case of sample-dependent covers, and this is the reason for the sample-dependent notions of Definitions 3.1 and 3.2. We hope that we were able to address the reviewer's comment; otherwise, we would be happy to clarify further. We will include this discussion in the paper. **References:** [S. Shalev-Shwartz and S. Ben-David 2014]: *Understanding Machine Learning: From Theory to Algorithms.* Cambridge University Press. 2014. [A. Kupavskii and N. Zhivotovskiy 2020]: *When are epsilon-nets small?* Journal of Computer and System Sciences, 2020. [M. Rudelson and R. Vershynin 2006]: *Combinatorics of random processes and sections of convex bodies.* Annals of Mathematics. 2006.
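A toy illustration of the projection point (the example and code are ours, purely for intuition): the infinite class of threshold functions $\{x \mapsto \mathbf{1}[x \ge t] : t \in \mathbb{R}\}$ has no finite cover in sup-norm, yet its projection onto any sample with $n$ distinct points has exactly $n + 1$ distinct behaviors, so a sample-dependent cover is trivially finite.

```python
def projected_class_size(sample):
    """Number of distinct value-vectors of the thresholds 1[x >= t] on the sample."""
    thresholds = sorted(set(sample)) + [float("inf")]  # one representative t per behavior
    behaviors = {tuple(1 if x >= t else 0 for x in sample) for t in thresholds}
    return len(behaviors)

sample = [0.3, 1.7, 0.9, 2.4]
print(projected_class_size(sample))  # 5, i.e. n + 1 for n distinct sample points
```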
Summary: In this paper, the authors tackle the problem of uniform mean estimation under heavy-tailed noise. Considering a set of functions and a random variable, they analyze the sample complexity of providing a uniformly consistent estimation of the mean of the functions evaluated at the random variable. Claims And Evidence: All the claims are supported by proofs. Methods And Evaluation Criteria: There is no experimental campaign. Theoretical Claims: I quickly went through the proofs. All of them seem correct and the results are reasonable. Experimental Designs Or Analyses: There is no experimental campaign. Supplementary Material: I quickly went through most of the proofs. Relation To Broader Scientific Literature: Existing literature deals with uniform mean estimation in the presence of non-heavy-tailed noise. This work advances the known results in the field in this sense. Essential References Not Discussed: All the essential references have been discussed. Other Strengths And Weaknesses: The paper presents the first algorithm tackling uniform mean estimation in the presence of heavy-tailed noise. The proofs seem correct and there is nice technical work. At some points the reading is not fluent, especially in the first sections; I would recommend stating the main results before giving an intuition of their proof. Moreover, I would suggest rounding ("lightening") some constants to the closest one-decimal value, at least, just to make the text clearer. Other Comments Or Suggestions: See above. Questions For Authors: I have no relevant questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to read the manuscript and provide feedback. We also appreciate the reviewer's suggestions for improving the organization and presentation by first giving the theorems and then the proof sketch. We also briefly emphasize that we are not the first to provide uniform convergence bounds under heavy-tailed noise, as described in the **Related Work** section of the manuscript. We differ from previous work in our focus on the *sample complexity* of the problem rather than on the estimation error. This is made possible by a new analysis technique and by the introduction of a novel complexity measure that, unlike previous work (which focuses on the Rademacher complexity), is the log of the size of a *relaxed* version of a discretization. Our main result (Theorem 3.4) enabled us to give improved sample complexity bounds in the important cases of k-means clustering and linear regression with general losses, which it is unclear how to derive from previous work. If the reviewer has any further comments, please let us know, and we will do our best to address them.
Everything Everywhere All at Once: LLMs can In-Context Learn Multiple Tasks in Superposition
Accept (spotlight poster)
Summary: This paper explores the phenomenon of task superposition: when presented with a mixture of in-context examples corresponding to different tasks, the output probability distribution of an ambiguous query shows sensitivity to the different tasks and the relative proportion of the examples under different tasks. The paper also investigates this phenomenon in models trained from scratch on synthetic tasks, and relates it to work on task vectors by showing the task vectors can be combined to shift the output probability. Claims And Evidence: Overall, the paper is very clear and the results are intuitive. I did find the conclusion that "transformers can in-context learn multiple tasks in superposition even if trained to in-context learn one task at a time" a bit of an over-claim, as the task sets being learned or evaluated are highly related, to the extent that they all use the same input. For example, the variants of the addition task can be regarded as just one addition task with different output languages, so an ICL-updated posterior over response types in some sense doesn't seem surprising. Moreover, it's unclear if the models mechanistically treat the different tasks as separate tasks -- for example, the model could perform the same internal computation for the addition task, but the presence of numbers in other languages would push the output distributions for those languages higher. Another concern/confusion I had is with the "K heads for K tasks" capacity claim. I suspect that due to some tasks sharing significant components with other tasks, models with K heads may easily learn more than K tasks, depending on how one defines a "task unit". And task boundaries in a continuous space may be intrinsically ambiguous. Methods And Evaluation Criteria: Related to the above, I find the results more interesting on the less "knowledge/retrieval"-like task sets, e.g. copying operands vs. adding, and taking the first/last letter + capitalize. 
I think if the evaluation can include more complicated or procedural tasks that are less knowledge/retrieval-based, it would make the phenomenon clearer and much more interesting. Theoretical Claims: No. Experimental Designs Or Analyses: Overall the design and analyses seem fine. Supplementary Material: No Relation To Broader Scientific Literature: This paper builds on prior studies of ICL and task vectors and provides some evidence on task superposition. Essential References Not Discussed: n/a Other Strengths And Weaknesses: I find the results clear and intuitive, and I appreciate that the authors studied this issue in different settings (e.g. with training from scratch). At the same time, I feel unclear about what to take away from the paper besides the observations. I think there is room to dive deeper to understand the phenomenon better. It would be nice if the authors could more clearly discuss the implications of this work -- perhaps a rational posterior update argument? Other Comments Or Suggestions: n/a Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
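As a side note, the "sensitivity to the relative proportion of the examples" discussed above can be quantified with a single number: the total variation distance between the in-context task mixture and the probability mass the model assigns to each task's answer. A toy sketch (the counts and probabilities below are made up for illustration, not measured from any model):

```python
def mixture_calibration_gap(example_counts, answer_probs):
    """Total variation distance between the in-context task mixture and the
    probability mass the model puts on each task's answer."""
    total = sum(example_counts)
    mixture = [c / total for c in example_counts]
    return 0.5 * sum(abs(p - q) for p, q in zip(mixture, answer_probs))

# 6 addition-in-English and 2 addition-in-French examples in the prompt,
# with hypothetical model probabilities on each task's answer token.
gap = mixture_calibration_gap([6, 2], [0.70, 0.22])
print(round(gap, 3))  # 0.04
```

A gap of 0 would correspond to the "ideal probability" dashed-line baseline the paper describes, where the output distribution perfectly matches the proportion of in-context task examples.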
Rebuttal 1: Rebuttal: Dear Reviewer fgkh, We sincerely appreciate your thoughtful feedback on our paper. Below, we address the specific concerns raised in the review. **More complicated tasks** Thank you for your suggestion. For more complicated tasks, e.g., grade-school math, the input $x$ is a math question and the task answer $g(x)$ usually includes a long chain-of-thought (CoT) reasoning. Note that in this case, there can be multiple equivalently correct ways to solve the given problem using different CoTs, so the task answer $g(x)$ is not unique. Therefore, the task (solving a math problem) is not a well-defined function from $\mathcal{X}$ to $\mathcal{Y}$ in this setting, and it will be hard to measure the probability of the task answer given the prompt ($\mathbf{P}(g(x)\mid\texttt{prompt})$). However, we think that it would be an interesting future direction to study task superposition on more complicated tasks. **Implications of this work** In this work, we show that LLMs are capable of simultaneously solving distinct tasks from in-context examples. Our work contributes significantly to the fundamental understanding of LLMs and highlights a critical area for future research -- developing decoding strategies that can maintain the model's multi-task state throughout the generation process. Strikingly, we find that a recent work [1] offers some hope in this direction. In particular, [1] propose a method to let LLMs generate continuous thoughts as reasoning states. Using this method, given a logical reasoning task that requires exploration, LLMs can explore multiple paths (each path as a sub-task) at the same time by encoding "a distribution of different traces into the continuous thoughts," which is a superposition of intermediate results from different paths. We hope our work can shed light on future studies of superposition in LLMs. We will add more discussion on the implications of our work in our next revision. 
We are happy to elaborate further if you have any remaining concerns. **References** [1] Hao, Shibo, et al. "Training large language models to reason in a continuous latent space." --- Rebuttal Comment 1.1: Comment: Thank you for the response! Adding the connection to the latent continuous reasoning states would be valuable. My concern about the claims surrounding task separability still remains. I still think it would be additionally valuable to more carefully unpack this issue in the paper to help interpret this phenomenon and its relation to the architecture (i.e. the "K heads for K tasks" capacity). Given that, I would like to remain at the current score. --- Reply to Comment 1.1.1: Comment: Thank you for your reply! Also, our apologies for omitting the "K heads for K tasks" capacity concern. The purpose of Theorem 1 is to show that task superposition is well within the expressive power of Transformers and that Transformers can efficiently represent the superposition mechanism using a small number of layers. We think it is an interesting future direction to find the optimal bound on how many tasks the model can perform with K heads. We will be sure to add the discussion on the connection with latent continuous reasoning and more discussion on interpreting task superposition.
Summary: The authors show large language models can naturally perform multiple, distinct tasks simultaneously, even when they were only ever trained on one task at a time. They describe this phenomenon as akin to 'superposition', which has been shown in previous work in settings with multiple tasks during in-context learning. Findings include: -Despite being trained with one-hot, single-task examples, the models' internal representations blend distinct task-specific computations when prompted with a mix of in-context examples. -Experiments where the authors patch in convex combinations of individual task vectors reveal that the output probabilities vary smoothly as the interpolation parameter changes. -(Appendix) The paper also shows that larger models can handle more tasks in parallel and align their output distributions more accurately with the mixture of in-context examples. ## update after rebuttal I remain a proponent of accepting this paper. Claims And Evidence: Yes, the methods and evaluation criteria support the claims. Methods And Evaluation Criteria: Yes, the methods make sense based on the claims in the paper. Theoretical Claims: I did not check the correctness of Theorem 1, but the claimed capacity of only K tasks with K heads per attention layer seems low. Wouldn't this also depend on the dimensionality of the activation space as well? The number of tasks that can be superposed should be larger based on this work: Superposition of many models into one, https://arxiv.org/abs/1902.05522. If you assume task orthogonality, the activation space can be partitioned into task-specific dimensions and potentially contain far more tasks if you assume near-orthogonality. Experimental Designs Or Analyses: The authors provide prompts that mix examples from different tasks (such as numerical addition in multiple languages or country capital/continent identification) and measure the output probabilities. 
By training a small transformer model on simple, single-task retrieval tasks and then testing it with mixed-task prompts, the experiment demonstrates that even when trained with one-hot targets, the model's internal representations can blend different task-specific computations. In another experiment, they "patch in" a convex combination of task vectors, each corresponding to a pure task, into the model. As the interpolation parameter is varied, the output probabilities shift smoothly between those of the individual tasks. This controlled manipulation underscores that the internal representation is a weighted mixture of the separate task representations. Supplementary Material: I only looked at parts of the supplementary relevant to the main text. Relation To Broader Scientific Literature: Superposition in the various contexts of neural networks is an important phenomenon, and the observation of it occurring in the setting of in-context learning shows that the property is even more prevalent than previously realized. Essential References Not Discussed: All the references I am aware of are referenced. Other Strengths And Weaknesses: It would be helpful to contrast the task vectors in the toy setting against those in real-world LLMs performing the same tasks, to see if the task vector embeddings in real-world LLMs deviate from those in the toy setting. This is fairly minor and just something that would be interesting to compare against. Other Comments Or Suggestions: "Gray dashed line in each figure is the ideal probability if we assume the model perfectly calibrates its output distribution to the distribution of in-context task examples. With uniform distribution of task examples, the dashed lines are at 0.25 (4 tasks setting) and 0.33 (3 tasks setting)." I could not find the dashed line in this figure. "we select two tasks ad we provide the model with prompts" Questions For Authors: How is this considered superposition as compared to good calibration? 
Good calibration being defined as the uncertainty/ambiguity of the completion because the model is given multiple in-context examples. I'm not sure if I fully agree with the argument that superposition is distinct from calibration even if the models are trained in a one-hot setting. Perhaps the distinction should be: Calibration is about the model's output uncertainty aligning with the in-context mix; for example, when you supply a 50/50 mix of two tasks, a well-calibrated model would ideally assign about 50% probability to each task's answer. Calibration, therefore, is an observable feature of the output distribution. Superposition, on the other hand, refers to what happens inside the model. Even when the model isn't perfectly calibrated at the output, its hidden representations encode a mixture of task-specific computations. So the framing of this work is that superposition of representations results in the observation of calibration in the outputs of the model. What are ways of observing this form of superposition 'in-the-wild'? For example does this occur for natural language sequences in data that is not based on data with in-context learning examples? And how does that deviate from the observations of the toy experiment (a small transformer, a GPT-2 variant with about 14 million parameters, trained on a family of simple retrieval tasks)? Code Of Conduct: Affirmed. Overall Recommendation: 4
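The convex-combination patching described in this review can be mimicked with a toy linear readout (everything below, the vectors and the readout, is a made-up illustration, not the paper's actual model): as alpha moves from 0 to 1, the softmax probability mass shifts smoothly from task 2's answer to task 1's.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Toy task vectors whose coordinates align with each task's answer logit.
v_task1, v_task2 = [4.0, 0.0], [0.0, 4.0]

def answer_probs(alpha):
    """Patch in alpha * v_task1 + (1 - alpha) * v_task2 and read out answer probabilities."""
    patched = [alpha * a + (1 - alpha) * b for a, b in zip(v_task1, v_task2)]
    return softmax(patched)  # identity readout: logit i is the i-th coordinate

for alpha in (0.0, 0.5, 1.0):
    print(f"alpha={alpha:.1f}  P(task1 answer)={answer_probs(alpha)[0]:.2f}")
# P(task1 answer) rises smoothly from ~0.02 through 0.50 to ~0.98
```

Under this linear readout the interpolation in representation space translates directly into a smooth interpolation of output probabilities, which is the qualitative behavior the patching experiments report.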
Rebuttal 1: Rebuttal: Dear Reviewer TgC8, We sincerely appreciate your thoughtful feedback on our paper. Below, we address the specific concerns raised in the review. **Theoretical claims** Thank you for bringing up this work. [1] focuses on superposition in NNs, while we focus on superposition in Transformers, where answers are weighted proportionally to the number of in-context examples corresponding to each task. It is correct that the number of tasks could affect the ReLUs used as well, depending on which portion of the Transformer implements a task. Indeed, when the ReLU layers are used, the width could be ~$Kd$. The purpose of Theorem 1 is to show that task superposition is well within the expressive power of Transformers and that Transformers can efficiently represent the superposition mechanism using a small number of layers. It will be an interesting future direction to find the optimal bound on how many tasks the model can perform with K heads. **Task vectors in toy setting** Thank you for your suggestion. We extract task vectors from our small trained-from-scratch models using the same pipeline as for the pretrained models. While our task vector extraction works well for large, pretrained models, we could not find task vectors that work well for our small models. For example, in [Figure I](https://github.com/Q16A/icl-sup/blob/main/Figure_I.pdf) we plot the accuracy on tasks `ret2` and `plus2` when using vectors extracted from different layers and observe that the maximum accuracy we can get is lower than 0.2 (while for a large real-world pretrained model such accuracy is usually near 1). A possible explanation is that, while task vectors for real-world LLMs are extracted from a specific layer, for the small models the feature that represents a task is likely not localized to a specific layer (as indicated in Figure I), which means that we need to modify how we extract the task vectors for small models. We believe this is an important area for further research. 
**Calibration and superposition** Thanks for providing these insights. We agree that superposition is more about what happens inside the model, while calibration is more about aligning the model's output with the in-context task example distribution. > So the framing of this work is that superposition of representations results in the observation of calibration in the outputs of the model. That is correct. We will add more discussion on this in our next revision. **Superposition "in the wild"** > What are ways of observing this form of superposition 'in-the-wild'? For example, does this occur for natural language sequences in data that is not based on data with in-context learning examples? Two papers [2, 3] that were released a few days ago also capture the superposition phenomenon in LLMs. In particular, the authors found that when asked to do two-digit addition, the LLM splits the problem into two sub-task paths: 1. one path estimates the rough range of the answer; 2. another path finds the exact last digit of the sum. The model internally employs parallel computational paths and then merges the results. This indicates that superposition can be commonly found in LLMs, and we hope our work can shed light on future research on superposition in LLMs. **Typos** Thanks for pointing this out. We will fix them in our next revision. We will add the dashed line as well. We are happy to elaborate further if you have any remaining concerns. **References** [1] Cheung, Brian, et al. "Superposition of many models into one." Advances in neural information processing systems 32 (2019). [2] Ameisen, et al., "Circuit Tracing: Revealing Computational Graphs in Language Models", Transformer Circuits, 2025. [3] Lindsey, et al., "On the Biology of a Large Language Model", Transformer Circuits, 2025. --- Rebuttal Comment 1.1: Comment: Thank you for responding to my questions. I have no additional comments and believe the authors will implement the changes they mention in the rebuttal. 
I remain a proponent of accepting this work. I am writing a response in case the authors have any additional comments or discussion. (Unrelated to the authors or this work: This is a truly poorly constructed discussion format for this year's ICML conference) --- Reply to Comment 1.1.1: Comment: Thank you for your reply! We will be sure to add the changes in our next revision.
Summary: This paper investigates the "task superposition" phenomenon of ICL, i.e., when multiple tasks simultaneously appear in the context, the model can assign non-negligible output probabilities to more than one task. Additional findings and contributions include: 1. Pretrained LLMs have a bias regarding which task to perform when given multiple tasks in a context. 2. On simple retrieval and addition tasks, transformers can in-context learn multiple tasks in superposition even if trained to in-context learn one task at a time. 3. Theoretically, there exists a seven-layer transformer with sufficiently large embedding dimensions and $K$ heads per layer that can perform $K$ tasks in superposition. 4. Adding convex combinations of task vectors of individual tasks can induce an output distribution similar to that induced by using a superposition input context. Claims And Evidence: Yes. Methods And Evaluation Criteria: N/A. This paper doesn't propose new methods. Theoretical Claims: I read the proof sketch outlined in Section 6, but I didn't thoroughly check the proof in Appendix E.4. Issues: the proof is a constructive one, i.e., there exists a choice of construction that can lead to the conclusion, but there is no guarantee that Transformers will definitely implement such a construction to realize superposition ICL predictions. Experimental Designs Or Analyses: Yes. The soundness and validity of the experimental designs and analyses are good. Supplementary Material: I reviewed all contents in the supplementary material except the mathematical proof. Relation To Broader Scientific Literature: The key finding that LLMs can perform task superposition reveals possibilities for designing real-world applications of LLMs, such as automatically inferring the desired task given a complex instruction context, etc. 
Essential References Not Discussed: I notice that a recent ICLR 2025 work [1] may further explain the finding in Section 4: "LLMs do not calibrate their output distribution perfectly with the in-context task example distribution and they still have bias on what task to perform". [1] investigated how ICL selects the training task priors to make predictions based on the test context and the pretraining distribution, and theoretically revealed that three factors determine the task selection of ICL prediction: 1) the ratio of each task in the training corpus; 2) the test error of each task on the test in-context examples; 3) the distributional distance between the training and test context inputs $x_i$. I believe further discussing how these task-selection mechanisms in [1] would function under your multi-task ICL setting would help to refine this part of your work. [1] Can In-context Learning Really Generalize to Out-of-distribution Tasks? Other Strengths And Weaknesses: **Strength:** 1. The studied task superposition problem is novel in the ICL literature. 2. The experimental designs are sound. The experimental results are persuasive in revealing the ability of Transformers to perform multiple tasks given a mixed-task context. 3. The authors provide theoretical evidence for the possibility of performing task superposition in Transformers. **Weakness:** The main weakness is that the practical value of the paper is very limited. Although the studied problem is novel and the analyses are sound, I'm concerned about how the findings in the paper could be beneficial to any real-world applications. Not only did the authors not design any new methods, but they also didn't provide a concrete application scenario in which this task superposition capability could be utilized. I appreciate the novelty of the studied problem and the comprehensive empirical analysis, but I believe that this paper could be further improved in its practical application value. 
Other Comments Or Suggestions: In Theorem 1, "A seven layer" -> "A seven-layer". Questions For Authors: Could you show me a specific real-world scenario where keeping the LLM in a multi-task state will be beneficial? I recognize that the generation collapse you mentioned is indeed an issue, seemingly caused by not maintaining a multi-task state. However, I'm curious about whether there are more advanced applications of harnessing task superposition, such as improving the instruction-following ability over multi-task instructions, or improving the reasoning ability in some multi-task interaction scenarios. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer FHnf, We greatly appreciate your constructive feedback on our paper. Below, we address the specific concerns raised in the review. **Theoretical claims** > The proof is a constructive one, ..., there is no guarantee that Transformers will definitely implement such a construction to realize superposition ICL predictions. It is true that our result is an existential one. The purpose of Theorem 1 is to show that task superposition is well within the expressive power of Transformers and that Transformers can efficiently represent the superposition mechanism using a small number of layers. In Section 7 we further investigate the underlying mechanism and empirically show that LLMs combine task vectors during task superposition. We think it will be an interesting future direction to theoretically show the exact underlying mechanism of task superposition for pretrained LLMs. **Discussion of Wang et al.** Thank you for bringing up this work. We think Wang et al. [1] is indeed related to our work. [1] shows that ICL will identify the "most suitable pretraining meta-distribution based on the test error and input distribution discrepancies" and operate within that meta-distribution. The bias we observe during task superposition (LLMs do not calibrate their output distribution perfectly with the in-context task example distribution, and different LLMs have different biases) can be explained by the different pretraining distributions of LLMs, which [1] investigated. However, we would also like to clarify that the algorithm-selection mechanism does not fully explain task superposition. In the setting of [1], at inference time, all examples in the input are from a single task and LLMs select a task in the pretraining distribution that has the lowest test error with the given task; in our setting, examples in the input are from multiple qualitatively different tasks and the model predicts a superposition of different task answers. 
We will add the discussion of [1] in our next revision. **Practical value is limited** > Not only did the authors not design any new methods, but they also didn't provide a concrete application scenario in which this task superposition capability could be utilized. Could you show me a specific real-world scenario where keeping the LLM in a multi-task state will be beneficial? A recent paper [2] shows a practical application of the task superposition capability. [2] proposes a method where LLMs can generate continuous thoughts as reasoning states that simultaneously encode intermediate results from multiple reasoning paths. For example, * In Figure 4 of [2], when asked a math problem that can be solved in multiple ways (and we can view each way of solving the problem as a sub-task), the LLM "encodes a distribution of different traces into the continuous thoughts." * Figures 5 and 6 of [2] further show that on some logical reasoning tasks, LLMs can explore multiple paths at the same time, like a breadth-first-search algorithm. This outperforms the traditional chain-of-thought method, which can only explore one path at a time. Moreover, we believe that explicitly characterizing the phenomenon of task superposition is valuable in its own right, as it helps us better understand LLMs. * Our observations align with the "simulator-in-superposition" hypothesis [3, 4] that emerged with the advent of GPT-3. This hypothesis suggests that LLMs can simulate multiple potential continuations or behaviors simultaneously, reflecting a superposition of different skills or tasks. By demonstrating that LLMs can internally represent and process multiple tasks in parallel when provided with mixed in-context examples, we provide empirical support for this theoretical framework. * Two papers [5, 6] released a few days ago also capture the superposition phenomenon in LLMs. 
[5, 6] found that when asked to do two-digit addition, the LLM splits the problem into two sub-tasks, internally employs parallel computational paths, and then merges the results. This indicates that superposition can be commonly found in LLMs, and we hope our work can shed light on future research studying superposition in LLMs. **Typos** Thank you for pointing this out. We will fix the typos in our next revision. We are happy to elaborate further if you have any remaining concerns. **References** [1] Wang, Qixun, et al. "Can In-context Learning Really Generalize to Out-of-distribution Tasks?." [2] Hao, Shibo, et al. "Training large language models to reason in a continuous latent space." [3] Reynolds, L., & McDonell, K. (2021). Multiversal views on language models. [4] moire. Language models are multiverse generators, January 2021. https://generative.ink/posts/language-models-are-multiverse-generators [5] Ameisen, et al., "Circuit Tracing: Revealing Computational Graphs in Language Models", Transformer Circuits, 2025. [6] Lindsey, et al., "On the Biology of a Large Language Model", Transformer Circuits, 2025. --- Rebuttal Comment 1.1: Comment: Thank you for elaborating on my concerns. Most of my questions are addressed. Moreover, I find the connection between task superposition and the multiple reasoning paths in latent CoT interesting. A deeper exploration of this connection could enhance the practical value of your work, especially in the current context where latent thinking and reasoning are gaining significant attention. I'm willing to raise my rating to 3. --- Reply to Comment 1.1.1: Comment: Thank you for your reply! We will be sure to add more discussion on this in our next revision.
Summary: This paper introduces the novel empirical finding that when presented with a context that contains a mixture of different tasks, an LLM will respond as though it is performing a superposition of those tasks. By training very simple small GPT-2-style models from scratch on artificial tasks where every training context contains only a single task, they give strong evidence that this superposition effect is a structural property of this style of neural network (i.e., is part of its inherent inductive bias) and does not come from the specifics of its training data. They connect this to the task-vector point of view (coming from the linear representation hypothesis for tasks) by showing evidence of linear combinations of tasks being active at the same time. Claims And Evidence: Yes. But I didn't check the proof in the supplemental for the theoretical construction. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. Experimental Designs Or Analyses: Yes. The experiments seem unambiguously clear. Supplementary Material: No. Relation To Broader Scientific Literature: This paper is nicely connected to the discussion around Simulacra, which has unfortunately (for academics) not taken place purely through papers. The fact that the authors reference blog posts on this topic is very good for the academic community. Essential References Not Discussed: There is a collection of academic works on actually learning mixtures. For example: https://dl.acm.org/doi/full/10.1145/3583680 It would be nice to connect to this literature. Other Strengths And Weaknesses: Very clear writing. Other Comments Or Suggestions: You have a bug in Figure 1. The task examples for b don't match the stated task. Questions For Authors: Does the ordering of tasks in context matter? It feels like you used random shuffles. 
But given the "U-shaped curve" of how models (and humans) seem to pay more attention to the beginning of the context as well as the most recent parts, does this also influence the superposition? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Dear Reviewer UuyG, We greatly appreciate your constructive feedback on our paper. Below, we address the specific questions raised in the review. **Figure 1(b)** Thanks for pointing this out. We will update Figure 1(b) in our next revision. **Does ordering of tasks matter?** In our setting of in-context learning, the order can affect how the model performs. For example, consider a scenario with three tasks, each presented in the prompt through 10 examples arranged sequentially: first 10 examples of task 1, followed by 10 examples of task 2, followed by 10 examples of task 3, and then the query. In this case, the model tends to assign higher probability to the answer for task 3. This is because there are 10 in-context examples of task 3 right before the query, and the model just follows the same pattern. However, if we randomize the order, the model won't just follow the task examples right before the query; instead, it calibrates its output probability distribution based on the in-context task example distribution in the prompt. We are happy to elaborate further if you have any remaining concerns. --- Rebuttal Comment 1.1: Comment: I would hope that the final version includes some quantitative exploration of what shapes the distribution over tasks as a function of the position distribution of tasks within context. --- Reply to Comment 1.1.1: Comment: Thank you for your reply! We will add analysis on this in our next revision. We will also cite more literature on learning mixtures as you suggested.
Prune 'n Predict: Optimizing LLM Decision-making with Conformal Prediction
Accept (poster)
Summary: The paper proposes conformal revision of questions (CROQ), which revises multiple-choice questions (MCQs) by narrowing down the prediction set. Additionally, the paper provides a corresponding routine for learning a CP score that aims to minimize set size under the coverage constraint. The experiments of the paper cover MMLU, ToolAlpaca, and TruthfulQA, using multiple LLM models. The results show CROQ improves the accuracy of test-time inference. Claims And Evidence: * Claims are clearly stated in the experiments section and are supported empirically. * Additionally, the basic premise that pruning response options leads to an improvement in accuracy is nicely shown in Figure 1. Methods And Evaluation Criteria: * No issue here. Theoretical Claims: * No issue here. Experimental Designs Or Analyses: * No issue here. Supplementary Material: * I read through the supplementary material. I did not see any attached code. Relation To Broader Scientific Literature: * The main contribution of the paper to the literature appears to be the idea of pruning answers according to the prediction sets. The secondary contribution is in the learning of a function for the conformity score. I am not aware of previous work that has used CP for pruning. Essential References Not Discussed: * I think this has been covered as far as I can tell. Other Strengths And Weaknesses: ### Strengths * The paper is very well-structured. It was made clear early in the paper what the idea of the work was and how it was going to be achieved. Relevant topics such as CP were nicely explained at a level suitable for the paper. Sometimes CP can be poorly explained and overcomplicated. * The idea to prune based on the prediction sets is a strength. * Experimental design, including clear hypotheses and statistical significance tests, is a strength. 
### Minor Weakness * The experimental results seem to point to a side effect (seen in Figures 4 and 5): when there are fewer choices to select from at the start, CROQ might result in worse accuracy. This can be seen from the MMLU-4 and ToolAlpaca-4 datasets. Other Comments Or Suggestions: * See below. Questions For Authors: * Was there any particular reason that $g(\cdot)$ used $\tanh$ nonlinearities? * Is the effectiveness of CP-OPT more dependent on whether the distribution of the training set matches the testing set compared to using the logits? Are there any results in the paper that might show this behaviour? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We are delighted with the positive feedback on our paper. We appreciate the recognition of the strengths in *the ideas, presentation, and soundness of empirical analysis*. Our response to the queries is as follows. **Why tanh for $g$?** While the choice of the function class $\mathcal{G}$ is up to the user, in general it makes sense to use a flexible non-linear function class. A multi-layer neural network with any activation function could be a good fit here. We chose tanh because its range is (-1, 1) and it is known to have nice properties, such as enabling efficient training [1]. [1] https://cseweb.ucsd.edu/classes/wi08/cse253/Handouts/lecun-98b.pdf **On the (mis)match between the distributions of the training and test sets.** In this work we assume that the training, calibration, and test data are i.i.d. (independent and identically distributed). Logit scores use calibration data to estimate the threshold for conformal prediction, which is used on the test data to create prediction sets. Thus, we evaluate both scores under the same conditions on the data. If we anticipate distribution shift in the test set, we could modify the CP-OPT objective to use distributionally robust optimization (DRO) techniques, so that it is robust to distribution shifts. This is beyond the scope of our work and could be interesting to explore in the future. We hope our response resolves the queries. We are happy to answer any further questions you may have.
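As an aside on the tanh choice discussed above, here is a minimal sketch of what such a score network $g$ could look like. All names, layer sizes, and the sigmoid output are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def tanh_mlp_score(x, W1, b1, W2, b2):
    """A two-layer MLP score g(x) with a tanh hidden layer.

    tanh keeps hidden activations in (-1, 1), which tends to keep
    gradients well-scaled during training; a sigmoid output maps the
    score into (0, 1) so it can be thresholded for prediction sets.
    """
    h = np.tanh(x @ W1 + b1)                     # hidden layer in (-1, 1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # score in (0, 1)

# Toy weights and features for 5 answer options with 8-dim features.
rng = np.random.default_rng(0)
d, hdim = 8, 16
W1 = rng.normal(scale=0.1, size=(d, hdim))
b1 = np.zeros(hdim)
W2 = rng.normal(scale=0.1, size=(hdim, 1))
b2 = np.zeros(1)

x = rng.normal(size=(5, d))
scores = tanh_mlp_score(x, W1, b1, W2, b2)
```

In practice the weights would be trained on the CP-OPT objective (minimize set size subject to coverage); this sketch only illustrates the shape of the function class.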
Summary: This paper proposes a method to enhance large language model (LLM) decision-making for multiple-choice questions (MCQs) and tool selection tasks using conformal prediction (CP). The authors introduce "Conformal Revision of Questions" (CROQ), which uses CP to identify and eliminate unlikely answer choices before re-prompting the LLM with the reduced set of options. They demonstrate that LLMs perform better when presented with fewer choices. Additionally, they propose CP-OPT, a score optimization framework that learns custom scoring functions to minimize prediction set sizes while maintaining statistical coverage guarantees. Claims And Evidence: The fundamental claim that reducing answer choices improves LLM accuracy is well-supported in Figure 1. This is exactly the case when we want to optimize tool use. However, the claims that CP-OPT produces smaller prediction sets than logit scores and that CP-OPT scores outperform logit scores when used with CROQ are not well supported by experiments. The improvement over logit scores is often quite limited. Methods And Evaluation Criteria: In general, the method makes sense for MCQ settings. There are several issues: 1. The baseline comparison is limited to logit scores from LLMs. Comparison with other uncertainty quantification methods for LLMs is needed. 2. The selection of the miscoverage rate α is arbitrary (set at 0.05 for most experiments). It would be better to provide clear guidance on selecting α. Theoretical Claims: There's no theoretical analysis of the convergence properties of the optimization procedure or guarantees that the learned score functions will approach the optimal ones. While the authors correctly cite the standard coverage guarantee for split conformal prediction in Proposition 2.1, they don't provide theoretical analysis of how their specific implementation might affect this guarantee in this setting or in practice. Experimental Designs Or Analyses: 1. 
For MMLU, the authors created versions with 10 and 15 options by adding options from other questions on the same topic. This artificial augmentation may not reflect natural MCQ distributions and could introduce biases. 2. The method should also compare with different score functions like UQ methods. Supplementary Material: I checked the figures and tables in the appendix. Relation To Broader Scientific Literature: The paper adequately situates itself within the conformal prediction and LLM uncertainty quantification literature. It builds on prior work applying conformal prediction to LLMs and extends this to downstream MCQ task improvement. Essential References Not Discussed: The authors only talk about conformal prediction for LLMs in the related work and omit important literature on UQ and confidence calibration. Other Strengths And Weaknesses: Strength: The idea of using conformal prediction to revise MCQs is clear to me. The approach requires no fine-tuning, making it broadly applicable. Weakness: 1. The improvement is often limited. And the authors should compare with other prompting methods like CoT, self-refine, etc. 2. This paper focuses solely on MCQ and tool selection tasks. In practical applications, open-ended QA tasks are much more common and often more valuable than multiple-choice formats. The CROQ method, which relies on pruning answer choices through conformal prediction, is fundamentally designed for settings with discrete, pre-defined answer options. This approach cannot be directly applied to open-ended QA tasks where the space of possible answers is effectively infinite. 
Can you provide deeper insights into when and why CP-OPT significantly outperforms logits? 3. How does CROQ perform when combined with other LLM performance enhancement techniques like chain-of-thought, CoT SC, few-shot? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate the detailed feedback and recognition of the clarity and broad applicability of our work. Our response is as follows. **Relationship to other UQ methods.** The reviewer correctly points out that there are other methods for quantifying uncertainty in the context of LLMs besides conformal prediction (CP), including methods for estimating and calibrating confidence. Our goal in this paper was to generate subsets of answer options with guaranteed coverage probability, which makes conformal prediction a natural framework. To the best of our knowledge, no other uncertainty quantification technique provides a similar guarantee. We emphasize that small prediction sets are valuable in and of themselves because they result in lower query costs, so this reduction of the space of answer options is an important feature of our procedure. [Please see the *Small conformal prediction sets reduce costs* section in the response to reviewer `z4Sk`.] We believe the flexibility in the choice of score function is part of the appeal of CP and CROQ. We agree that it will be important to investigate how CROQ works with other choices of score functions, which we leave for future work. **Choice of $\alpha$.** Please see the *Choice of $\alpha$* section in our response to reviewer `z4Sk`. **Theoretical guarantees.** We appreciate the reviewer pointing out that the relationship between Proposition 2.1 and our procedure may not be clear. We have reframed this proposition and its proof slightly to make it clear that our procedure enjoys this coverage guarantee. Regarding the optimization procedure for CP-OPT, we have pointed out (in lines 222-224, second column) that the empirical surrogates converge almost surely to their population counterparts. We have reworded the text to point out that this also holds for the cross-entropy term $\widehat{C}(g)$. We have also added intuition regarding the relationship between problems (P2) and (P1). 
We defer a more formal convergence analysis to future work. **Importance and Generality of the MCQ setting.** Please see the same section in our response to the reviewer `p4om`. **Computational cost.** As implemented in our paper, CROQ requires two queries to a given LLM. However, as noted by reviewer `grWo`, it's possible to cache the input such that the difference in cost between one query vs. two will be relatively minimal. Furthermore, as discussed in the *Small conformal prediction sets reduce costs* section in the response to reviewer `z4Sk`, it is possible to use a very cheap method to generate the scores such that the cost (both computational and literal) primarily derives from the query which produces the MCQ answer. For example, the user could use a pre-trained semantic embedding and then use the cosine similarity between the query and each response option as the conformal score. This cost is minimal compared to the cost of an MCQ query, and in general, we expect that it will be more than offset by the reduction in query cost due to the reduction in the number of answer options. Methods that involve self-consistency or self-refinement will, in general, require multiple queries, and therefore, we expect them to be more expensive, but we leave a full investigation for future work. However, we have added a discussion along the lines of the above to the paper appendix. **Magnitude of CP-OPT gains vs. logits.** While the advantages of CP-OPT over logits in terms of set sizes and accuracy are numerically small, we believe that the real-world difference can be substantial at scale when large numbers of users are querying an LLM repeatedly. The figures and tables in the appendix aim to provide a finer-grained view of when and how CP-OPT improves over logits. 
For example, we observe that CP-OPT, in general, yields more sets of size 1 than logits and that the accuracies vary as a function of set size (which indicates that set size is a good measure of overall uncertainty). We hypothesize that CP-OPT improves over logits precisely where logits are poorly calibrated, i.e., where an LLM is over- or under-confident. We plan to test this hypothesis in future work. We hope our response resolves the queries. We are happy to answer any further questions you may have. --- Rebuttal Comment 1.1: Comment: I thank the authors for their responses. However, many of my concerns are still not addressed. For example, the method doesn't compare with other simple prompting methods for MCQ questions or other tool optimization methods for tool selection as baselines, given that the improvements with CROQ are not quite significant. Next, the improvements increase as the number of choices increases. However, adding artificial choices may not reflect natural MCQ distributions and could introduce biases. I recommend the authors try some datasets with more choices, like MMLU-Pro. Besides, the authors claim that using the cosine similarity between the query and each response option as the conformal score will significantly reduce the cost. But the performance of using this simple score function is questionable. --- Reply to Comment 1.1.1: Comment: Thanks for the response. We provide additional experimental results and clarifications below. **On our experimental setup.** We augmented the answer choices by randomly drawing answer choices from other questions, and on MMLU, where questions are labeled with topics, we sampled the additional answer choices from questions on the same topic. Thus, the choices were not arbitrary, and more importantly, the LLMs' baseline accuracy on the datasets with the inflated number of choices decreases substantially. This indicates that the choices introduced are effective distractors. 
**Evaluation on MMLU-Pro.** We evaluated CROQ on the MMLU-Pro dataset with questions having 10 options. We observe that the baseline accuracy with the Phi-3 model is 36.4%, and we get a 3% relative improvement in accuracy with CROQ – a significant improvement on a 10-option dataset, particularly given that MMLU-Pro contains much harder questions. We will include these results in the paper. In addition to this, we have results on a practical application where an agentic system needs to select the right tables for generating SQL for a given natural language query. Please see the response to reviewer `p4om` for details on this. **On Prompting Methods.** Our focus and main contributions are on the CROQ procedure that revises the question after pruning options using conformal prediction. For evaluation, we chose a simple and computationally efficient procedure for solving MCQ-type tasks. In this procedure, a forward pass is run on the model to obtain logits (scores) for the answer choices, and the choice with the highest score is selected. While CROQ can be used in concert with other prompting strategies, such as CoT, we favored the above procedure for clean and computationally efficient evaluation, in contrast to prompting methods, which can involve generating a large number of tokens to get an answer. The generated response also depends heavily on the choice of decoding strategy used. Using CROQ in conjunction with CoT could be an interesting direction for future work.
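The cheap embedding-based conformal score mentioned earlier in this rebuttal thread could look roughly like the following. This is a minimal sketch: the random vectors stand in for a pretrained semantic embedding of the query and answer options, and the function name is an illustrative assumption:

```python
import numpy as np

def cosine_scores(query_emb, option_embs):
    """Cosine similarity between a query embedding and each answer-option
    embedding; higher similarity can serve as a cheap conformal score."""
    q = query_emb / np.linalg.norm(query_emb)
    opts = option_embs / np.linalg.norm(option_embs, axis=1, keepdims=True)
    return opts @ q  # one similarity per option, each in [-1, 1]

# Toy embeddings standing in for a pretrained semantic embedding model.
rng = np.random.default_rng(1)
query = rng.normal(size=16)
options = rng.normal(size=(4, 16))
sims = cosine_scores(query, options)
```

Computing these similarities costs a handful of dot products per question, which is negligible next to an LLM query; whether such a simple score yields tight prediction sets in practice is, as the reviewer notes, an empirical question.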
Summary: This paper uses conformal prediction sets to improve the performance of LLMs on multiple-choice question answering tasks. In particular, they propose a framework in which they first construct a prediction set and then re-ask the same question with the limited options in the set from the LLM. They then empirically show this method can improve the accuracy of the LLM on a variety of multiple-choice tasks. They also offer some optimization methods to improve the score function used in the CP pipeline to promote tighter prediction sets, which then improve the effectiveness of the proposed method. I have read the other reviewers' comments and I want to keep my score. My major concern, which remains, is that I still do not understand why CP, its coverage guarantee, and pruning based on CP sets are of practical relevance here. I am very familiar with CP tools, and this does not make sense to me. To push such a narrative, there should be either a very strong set of experimental setups, where the authors compare with a wide range of pruning techniques and other UQ ideas, or alternatively, some form of theory, even minimal, that showcases CP sets as the correct tool for this problem. Claims And Evidence: The claims are clear but the evidence is not entirely convincing. It is not obvious if such a framework can actually improve the accuracy of the LLMs in a meaningful way. In particular, it is not obvious to me how "95%" sets would be a meaningful notion in this problem, as you will be missing the correct label 5 percent of the time by design, and then forcing the LLM to pick among wrong answers. This then raises the question of how to pick alpha, and whether such a framework makes sense at all. Methods And Evaluation Criteria: I do not think the evaluation criteria are sufficient. Due to the concern raised above, I think there should be extra evaluation metrics to have a finer-grained understanding of what happens when applying this framework. 
For instance, it might be the case that, even though the overall accuracy is improved, there would be a non-negligible number of cases where the LLM might have given the correct answer originally, but now, due to being restricted to a set which does not include the correct label (which happens 5 percent of the time), the LLM is forced to give a wrong answer. Theoretical Claims: There are not many theoretical claims in the paper. Experimental Designs Or Analyses: ... Supplementary Material: Yes, I took a look at the extra plots and tables. Relation To Broader Scientific Literature: Uncertainty quantification in LLMs is a very important and active field, and the idea of re-prompting the LLM after uncertainty quantification sounds interesting. Essential References Not Discussed: Not that I know of. Other Strengths And Weaknesses: My major concern is the lack of any theoretical or even higher-level intuitions/observations/discussions on why such a framework based on CP is the correct way of informing the LLM about its uncertainty. This framework creates a tradeoff between the informativeness of UQ with CP (when choosing alpha too small) vs. some inevitable mistakes that we force on the LLM (by choosing alpha large), and this does not sound like the correct tradeoff to look at. For instance, an immediate alternative to the proposed framework could be: we construct the CP sets, but instead of re-asking the question with the limited options, this time we append the set to the context window of the LLM and then explain that the correct answer is in this set with 95 percent probability. How does this change the situation? Is it better or worse? Other Comments Or Suggestions: ... Questions For Authors: ... Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for careful attention to the paper. Our response to the queries is as follows, **Choice of $\alpha$.** We set $\alpha$ to a single (fairly arbitrary) value of $0.05$ in the main body of the paper for simplicity of exposition, but $\alpha$ can be treated as a hyperparameter and tuned to maximize accuracy. We discuss this in the Appendix in section B.1, with results in Figures 4 and 5. As discussed in lines 73-83 (first column), we agree that $\alpha$ represents a tradeoff whereby larger values result in smaller sets with some probability of excluding the correct answer. If we set $\alpha = 0$, then we'd include all the answer options each time, which recovers the original MCQ setting, so in some sense, we've simply parameterized a tradeoff that we can then optimize for a downstream criterion of interest like accuracy. **Evaluation criteria.** We appreciate the reviewer's point. The additional figures and tables in the appendix provide a finer-grained view of (1) the distributions of answer set sizes produced by our procedure, and (2) accuracy conditional on set sizes. In addition, we have added tables that illustrate how often the CROQ procedure causes the LLM to "switch" from incorrect to correct answers and vice versa. We do indeed observe that our CROQ procedure causes the LLM to incorrectly answer some small proportion of questions which it initially answered correctly, but *this is more than offset in general by answers which it initially got incorrect and which it gets correct after CROQ*. **Small conformal prediction sets reduce costs**. In addition to the goal of improving accuracy, we note that smaller set sizes are generally desirable in and of themselves because fewer tokens in the prompt means lower query costs. This difference can be substantial in settings like text-to-SQL or other tool usage/API selection settings, where the text describing a given answer option can be extremely large. 
While in our experiments we used the same LLM both to generate the conformal scores for the purposes of constructing the conformal prediction set *and* to generate the final MCQ answers, it's possible to use a model cascade such that a small/cheap model is used to generate the conformal prediction sets and then the MCQ is passed to a larger/more expensive LLM to generate an answer. In ongoing experiments in a text-to-SQL setting, we observe that by using a model cascade like this, we are able to substantially reduce the overall query cost while preserving or improving downstream accuracy. Once again, these tradeoffs can be optimized by tuning $\alpha$. We have added a section to the appendix and a small reference in the main text discussing this. (In addition, please see the existing section B.2 in the appendix for some discussion of model cascades and cost tradeoffs.) **Why Conformal Prediction (CP) is a useful framework.** The idea of appending the conformal set to the original query is interesting and we believe would be worth investigating in future work. Our goal with the CROQ procedure, however, was to *reduce* the amount of uncertainty in MCQ-type queries, rather than to inform the LLM about its own uncertainty per se. We aim to reduce the likelihood that the LLM will be "distracted" by an available incorrect answer by pruning those answers (with high probability). This procedure is motivated by the simple empirical observation in Figure 1 that LLMs are more likely to answer correctly when there are fewer distractor options. Regarding theoretical justification, we emphasize that our procedure satisfies a coverage guarantee (Proposition 2.1), which means that the correct answer will be inadvertently removed at most a proportion $\alpha$ of the time. As discussed above, this is in some sense simply a generalization of the vanilla MCQ setting with a parameterized tradeoff that can be optimized. We hope our response resolves the queries.
We are happy to answer any further questions.
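The pruning step of the CROQ procedure discussed in the rebuttal above can be sketched in a few lines. This is a minimal illustration, not the authors' code: `croq_prune` assumes conformity scores (higher = more conforming) from any source (LLM logits or CP-OPT) and a threshold `tau` already computed on a calibration split; the fallback for an empty set is a choice made here for the sketch.

```python
def croq_prune(options, scores, tau):
    """Keep only options whose conformity score clears the calibration
    threshold tau; by the conformal guarantee, the kept set contains the
    true answer with probability >= 1 - alpha."""
    kept = [opt for opt, s in zip(options, scores) if s >= tau]
    # Degenerate case (an assumption of this sketch): if everything is
    # pruned, fall back to the single highest-scoring option.
    if not kept:
        kept = [max(zip(options, scores), key=lambda p: p[1])[0]]
    return kept

# Hypothetical scores for a 4-option question, threshold from calibration.
options = ["A", "B", "C", "D"]
scores = [0.70, 0.05, 0.20, 0.02]
pruned = croq_prune(options, scores, tau=0.10)
print(pruned)  # → ['A', 'C']
```

The question would then be re-asked with only the surviving options, which is where the accuracy gain in Figure 1's fewer-distractors observation is realized.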
Summary: First, the paper observes that removing incorrect answer choices from the answer sets given to an LLM improves performance. This motivates conformal revision of questions (CROQ), a simple method to boost multiple choice QA (MCQA) on any model and any dataset by first asking a question to the model, building a confidence set of MCQA answers that includes the true answer with 1 - $\alpha$ probability, and then re-prompting the model with only the answers in the set as the given answer choices. The confidence sets are built using split conformal prediction. Next, the paper argues that current prediction logits are not explicitly optimized for producing good confidence sets, and proposes the CP-OPT objective to learn a small auxiliary head off of an LLM that can produce prediction scores that result in more useful (smaller) confidence sets. The equation P1 shows the objective for CP-OPT optimization, which minimizes the expected average confidence set size over the train set, subject to an expected confidence coverage constraint. P2 shows a surrogate objective that relaxes the constraints and replaces step-wise functions with smooth sigmoids for differentiability. After optimizing with CP-OPT, the hold-out calibration set used for conformal prediction may result in smaller confidence sets at the required level of coverage. The authors experimentally verify the efficacy of CROQ over standard QA, the generally smaller confidence set size of CP-OPT over logit-based conformal prediction, and the resulting improvements of CROQ when using CP-OPT instead of logits. They also conduct additional ablations and experiments to understand the behavior of the new methods. Claims And Evidence: * "Our extensive experiments on MMLU, ToolAlpaca, and TruthfulQA datasets with multiple LLMs show that CROQ improves accuracy over the standard inference, with more pronounced gains when paired with CP-OPT."
-- well-supported * The three hypotheses in Section 4 are well-explored, and the evidence given is generally sufficient to support them. * Other analysis claims are reasonable. Methods And Evaluation Criteria: Yes, the datasets are standard QA datasets common for QA tasks. The main metrics (accuracy, coverage, and confidence set size) are suitable for the two objectives of CROQ and CP-OPT, respectively (produce good QA performance and useful confidence sets). Theoretical Claims: There are not any proofs (mostly, the paper relies on theoretical arguments from other works). Experimental Designs Or Analyses: I reviewed the main paper experimental designs and the ablations/additional experiments presented in Figures 3-5 in the appendices. I am satisfied with the experimental design, although I do wish a little more attention was given to discussing some of the choices of hyperparameters for the CP-OPT loss objective. See my W3-4 in the Strengths and Weaknesses section. Supplementary Material: There is a substantive set of appendices. Code is not attached. I reviewed the appendices, but did not closely review the content of all prompting strategies nor all of the tables on pages 17-24. Relation To Broader Scientific Literature: While prior works (Vovk et al., 2005; Angelopoulos et al., 2022) use conformal prediction as a way of expressing the confidence of machine learning systems in predictions in the form of calibrated-size prediction confidence sets, this seems, to my knowledge, the first work to use these confidence sets to boost MCQA by iteratively narrowing down the answer choices used to prompt a model. There is a large literature on producing calibrated and accurate confidence scores along with LLM QA predictions (Tian et al. 2022, Kadavath et al. 2022, Sebastian et al. 2024). These methods rely on consistency of sampling, logit values, calibrated confidence prediction heads, and textual elicitation to compute confidence scores associated with a final answer.
This is a bit of a different motivation from the conformal prediction, but employing a similar scope of techniques. Farquhar, Sebastian, et al. "Detecting hallucinations in large language models using semantic entropy." Nature 630.8017 (2024): 625-630. Kadavath, Saurav, et al. "Language models (mostly) know what they know." arXiv preprint arXiv:2207.05221 (2022). Tian, Katherine, et al. "Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback." arXiv preprint arXiv:2305.14975 (2023). Other citations are present in the paper's bibliography. Essential References Not Discussed: Not to my knowledge. Other Strengths And Weaknesses: * S1: Clear presentation and writing * S2: Motivation is strong * S3: Results are presented clearly and statistical analysis is presented well * S4: With proper input caching, it seems the re-prompt step in CROQ could be done rather efficiently, to add relatively little extra inference time. Weaknesses: * W1: CP-OPT requires a labeled finetune train step on a particular dataset, while logits do not and can be applied to any QA dataset with only a calibration step to select the conformal prediction threshold. * W2: There is little analysis on how a tuned CP-OPT head for a particular dataset/model might be used/generalized to other settings (with merely a new calibration step) to avoid retraining. I find it unlikely that it would work well between models, as the activations are different, but I wonder if it would work well between datasets. * W3: In P2, there are two different trade-off weight hyperparameters, lambda and lambda_1. I only see discussion of lambda in Appendix E. It's not clear how lambda_1 is set. * W4: The lambda parameter describes how strongly to consider the soft coverage constraints while training the CP-OPT model. In Appendix E, we see that the lambda choices for different model/dataset settings are widely different. 
For example, in some cases, the chosen lambda is 0.1, and in other cases it is 10. This means a few things: first, there is some additional cost and effort required to tune this hyperparameter compared to the logits procedure, which has no such hyperparameter. Second, it is unclear how this hyperparameter was tuned--was it trained with signal on the train set only? the train and calibration sets? Other Comments Or Suggestions: N/A Questions For Authors: * Q1: In equation 2, is there possibly a slight definitional issue? It seems to me that the verbal description of the threshold as "the smallest empirical quantile of the scores for the correct answers on the calibration dataset that is sufficient to satisfy (an empirical version of) the coverage property" does not quite match with the definition of the conformal sets as being everything with score greater than or equal to the threshold -- do we not then want the "largest empirical quantile that is sufficient to satisfy the coverage property?" And would the proper equation then be something more like a max {q} such that some empirical fraction of examples at or above the threshold is greater than or equal to 1 - alpha? * Q2: The tradeoff between coverage level of the pruned set and boosted performance in the revised task seems interesting to draw some conclusion about in a very simplified toy setting. With some strong toy assumptions about the behavior of the relationship depicted in Figure 1 (i.e. choosing a closed-form function that acts something like the monotonic curves shown in Figure 1), could we make any argument about what the ideal alpha would be to minimize the tradeoff between missed answers in the confidence set and improved accuracy in the second QA step? I do not think this analysis is critical for inclusion in this paper, but I am interested in it! Code Of Conduct: Affirmed. Overall Recommendation: 4
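The P2 surrogate summarized in this review (a smooth set-size term plus a relaxed coverage constraint, with step functions replaced by sigmoids) could be sketched roughly as follows. This is an illustrative reconstruction, not the paper's implementation: the hinge-style penalty, temperature, and hyperparameter values are assumptions of the sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cp_opt_surrogate(scores, true_idx, tau, alpha=0.05, lam=1.0, temp=0.1):
    """Differentiable stand-in for the P1 objective: average soft set size
    plus a penalty when soft coverage of the true answer drops below 1 - alpha.

    scores: (n_questions, n_options) conformity scores g(x, y)
    true_idx: index of the correct option for each question
    """
    # Sigmoid relaxation of the indicator 1[g(x, y) >= tau]
    soft_member = sigmoid((scores - tau) / temp)
    avg_set_size = soft_member.sum(axis=1).mean()
    # Soft coverage of the true answer across questions
    soft_cov = soft_member[np.arange(len(scores)), true_idx].mean()
    return avg_set_size + lam * max(0.0, (1 - alpha) - soft_cov)
```

With a well-separated true answer and a tight threshold, the loss approaches a set size of 1 with no coverage penalty; loosening `tau` inflates the set-size term, which is the tradeoff the lambda hyperparameter (W4 above) balances.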
Rebuttal 1: Rebuttal: We appreciate the thoughtful review and the noted strengths on presentation, problem motivation, empirical evaluation, and results of our paper. Our response to the queries and comments is as follows. **Input caching to make re-prompting efficient.** We thank the reviewer for the suggestion to use input caching to reduce the inference time in the second round of CROQ. Since the part of the question without the options remains the same, we can definitely cache the inference output on tokens in the question and re-use this in the next round's inference with a reduced set of options. We plan to implement and release this soon. **CP-OPT requires training a small neural network on a dataset.** As noted, logit scores can be obtained off-the-shelf from the language model. However, they can be unreliable and could be less effective for downstream use cases such as ours. To improve the quality of the scores for the application at hand, it might be necessary to tune the scores accordingly. Note that our procedure CP-OPT to tune scores is light-weight — it only requires training a small 2-layer neural network $g$ on features extracted from the LLM, and the inference overhead of $g$ is negligible compared to the LLM inference, which is also needed for logit scores. Thus, while there is an extra compute cost with CP-OPT, it is insignificant in comparison to the LLM inference cost, and this cost pays off with improved performance. It also pays off with reduced set sizes, which means lower query costs. Please see the paragraph *Small conformal prediction sets reduce costs* in our response to reviewer `z4Sk`. **Re-use $g$ across models and datasets.** We appreciate the thought of re-using $g$ to mitigate the minor cost of re-training it for different datasets and models. On sharing $g$ across LLMs, we agree with your view on this. Since $g$ is trained on features from an LLM, using it on features from another LLM will likely not work.
Moreover, different datasets may differ in features and thus a $g$ trained on one dataset may not work well on another dataset. We make two final remarks on this, i) The cost of training $g$ is insignificant compared to the LLM compute costs. ii) Reuse of $g$ could be achieved for instance by incorporating multiple models and datasets while training $g$ or first learning some model invariant representations that are fed to $g$. Exploring this could be an interesting future work. **W3.** $\lambda_1$ corresponds to the weight decay hyperparameter. **Discussion on choices of hyperparameters for CP-OPT.** Yes, there is a small cost of tuning the hyperparameters. We select the hyperparameters by observing the performance on the validation data. We have included a more detailed discussion on this in the paper. **Equation (2) clarification.** Thanks for your careful attention. We believe there is some confusion due to the interpretation of scores. Some of the canonical works (Angelopoulos & Bates, 2022) in conformal prediction have used *non-conformity scores* for $g$, i.e., lower is better. In our work, we interpret $g$ as measuring *conformity*, i.e., higher is better. Thus, in our setting, the threshold will be the lowest $\alpha$ quantile of the scores, hence equation (2). **Analysis of CROQ in toy setting.** This is indeed very intriguing. We appreciate the reviewer's enthusiasm. We can characterize the accuracy gain and $\alpha$ if we assume the LLM satisfies *monotone accuracy property* as in Figure 1. Consider a predictor (LLM) that has accuracy $f(k)$ on questions with $k$ choices. It is fair to assume that as the number of choices $k$ decreases, the accuracy $f(k)$ increases, i.e. $f$ is a monotonically decreasing function of $k$. This is also confirmed in our experiments (Figure 1). We refer to this as the monotone accuracy property of the predictor. 
Now, let the initial number of options in the questions be $M$; after revising them with conformal prediction (CP), the questions have $m<M$ choices, and CP guarantees that the true answer is still among the $m$ choices for a $1-\alpha$ fraction of the questions. Then, the gain in accuracy after CROQ is as follows, $$\text{Gain} = \text{Accuracy After} - \text{Accuracy Before}$$ The accuracy after is $f(m)$ times the fraction of questions for which the true choice is in the revised question, i.e., $f(m)(1-\alpha)$: $$\Delta(M,m,\alpha) = f(m)(1-\alpha) - f(M) = f(m) - f(M) - \alpha f(m).$$ Now we can make two claims, * If $\alpha$ is fixed, then we should see improvements whenever $f(m) > \frac{f(M)}{1-\alpha}$. * If $\alpha$ is not fixed, then the gain $\Delta(M,m,\alpha) > 0$ for any $\alpha < \frac{f(m) - f(M)}{f(m)}$. By the monotone accuracy property of the predictor, $f(m) - f(M) > 0$, which means any $\alpha \in \left(0, \frac{f(m) - f(M)}{f(m)}\right)$ will yield a gain in accuracy. ----- We hope our response resolves the queries. We are happy to answer any further questions. --- Rebuttal Comment 1.1: Comment: Thank you, I appreciate these responses and the additional theoretical treatment. I maintain my score and recommend acceptance. I somewhat disagree with Reviewer p4om who mentions limited scope of experiments -- in fact I find the QA experiments very reasonable, and believe agentic experiments might muddy the clean evaluation of this method against other QA methods. Re: Equation 2, I guess what I should say is that even if one does consider a conformity score, I think the minimum operator as written results in a set which _does not_ satisfy the empirical coverage property, as the empirical miscoverage rate, I imagine, should really be smaller than alpha. Do you disagree? --- Reply to Comment 1.1.1: Comment: Thanks for the reply and endorsing our empirical evaluation. **Equation 2 correction.** Thank you for raising the query on equation 2.
The min and direction of the inequalities are still correct. However, we realized upon review that we were missing a correction factor, which, when included, could indeed result in the empirical miscoverage rate being smaller than $\alpha$. While the correction factor can be inserted into the current equation (2) expression, for increased clarity, we have rewritten the definition of the threshold ```\hat{\tau}_\alpha``` as the ```\lfloor (n+1)\alpha \rfloor / n``` empirical quantile of the scores from the calibration dataset. This gives the desired coverage guarantee, following the same proof technique that is in Appendix D of Angelopoulos & Bates (2022), "A Gentle Introduction to Conformal Prediction". Using their notation, we have ```\hat{\tau}_{\alpha} = s_{\lfloor (n+1)\alpha \rfloor}```, and for any test point $(X_\text{test}, Y_\text{test})$ with corresponding conformal score $s_\text{test}$ and prediction set $\mathcal{C}(X_\text{test})$: ``` \mathbb{P}(Y_\text{test} \in \mathcal{C}(X_\text{test})) = \mathbb{P}(s_\text{test} \geq s_{\lfloor (n+1)\alpha \rfloor}) = (n - \lfloor (n+1)\alpha \rfloor + 1)/(n+1) \geq 1 - \alpha ```. We have included this proof in the appendix of the paper and slightly modified the text accordingly.
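The coverage behavior of this kind of threshold can be checked numerically. The simulation below is an illustrative sketch (not the authors' code): it draws exchangeable conformity scores, takes the $\lfloor (n+1)\alpha \rfloor$-th smallest calibration score as the threshold, and measures how often a fresh true-answer score clears it.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n = 0.05, 1000

# Conformity scores of the TRUE answers on the calibration set
# (higher = more conforming, matching the rebuttal's convention).
cal_scores = np.sort(rng.uniform(size=n))
k = int(np.floor((n + 1) * alpha))  # order-statistic index, here 50
tau = cal_scores[k - 1]             # s_{floor((n+1)*alpha)} (1-indexed)

# Fresh exchangeable test scores: a test point is "covered" when its
# true-answer score clears the threshold, i.e. the true answer stays in the set.
test_scores = rng.uniform(size=200_000)
coverage = (test_scores >= tau).mean()
print(round(coverage, 3))  # close to (n - k + 1)/(n + 1) ≈ 0.950
```

The empirical coverage concentrates around $(n - \lfloor (n+1)\alpha \rfloor + 1)/(n+1)$, slightly above $1-\alpha$, matching the corrected guarantee.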
Summary: This paper proposes a method for improving the performance of LLMs on MCQ benchmarks using conformal prediction. The key idea is to construct an uncertainty set that contains the correct answer with high probability, prune all answer choices that are outside the set, and then present the LLM with the reduced set of choices that lie in the set. The motivation for the paper seems to be that the zero-shot performance of LLMs degrades on MCQ benchmarks as the number of answer choices increases. The authors show that this conformal-guided approach marginally improves the accuracy of LLMs on some commonly used MCQ benchmarks. Claims And Evidence: **Main claim:** Statistical uncertainty measures can be used to reduce the set of options in MCQ benchmark items, which in turn can improve the accuracy of the LLM when operating on a pruned set of choices instead of the full set. **Evidence:** Improvements in accuracy of LLMs on the MMLU, ToolAlpaca, and TruthfulQA benchmarks. Methods And Evaluation Criteria: The paper uses standard datasets and evaluation metrics. However, the paper lacks any baselines apart from ablations of the proposed method. Theoretical Claims: The theoretical validity claim in Proposition 2.1 is a known result in the conformal prediction literature. No other theoretical claims are made by the authors. Experimental Designs Or Analyses: I checked the soundness of the experiments and I think the evaluation procedure is sensible, though there is a lack of baselines for improving zero-shot MCQ performance of LLMs using methods other than uncertainty-based pruning. Supplementary Material: I have not checked the supplementary material. Relation To Broader Scientific Literature: The paper does not include new contributions in conformal prediction methodology.
The backbone of the proposed method is a simple split conformal procedure, and the CP-OPT method is largely based on (Stutz et al., 2022), but with application to post-hoc features instead of end-to-end training of the LLM. The key contribution of the paper is an ad-hoc way to revise the LLM answers to MCQ questions by applying existing conformal prediction methods to construct prediction sets that filter out unlikely answers. Essential References Not Discussed: The paper covered two strands of literature: applications of conformal prediction to LLMs, and "optimizing" conformal prediction procedure. I think the paper is missing a discussion of other approaches for improving the zero-shot performance of LLMs in MCQ benchmarks, e.g. chain of thought prompting, etc. I think this is important since the goal of the paper is to improve accuracy and not quantify predictive uncertainty. Other Strengths And Weaknesses: My primary concern with this submission is that it does not address a meaningful problem. MCQ benchmarks are designed to evaluate the capabilities of LLMs in various areas, such as reasoning abilities and knowledge comprehension. However, the MCQ format itself does not represent a meaningful real-world task or practical goal. Consequently, the proposed method appears to exploit the artificial structure of MCQ benchmarks rather than genuinely enhancing the underlying capabilities of the LLM. Although the paper motivates its proposed method by emphasizing the need for accurate decision-making, this method is specifically tailored to MCQ-formatted tasks, which are not representative of real-world decision-making scenarios. Furthermore, the improvements in accuracy reported in Tables 2 and 3 appear marginal and only become noticeable when the number of choices approaches 15. Given that the methods used to construct the sets are themselves not novel, these factors collectively make the potential impact and contribution of the paper unclear. 
Other Comments Or Suggestions: Given the marginal gains in accuracy, I think it is important to consider other prompting-based baselines for improving the accuracy of LLMs, to evaluate the gains from the pruning-based method vs. prompting-based methods. Questions For Authors: - If you are using a standard split conformal procedure, does this mean your sets are not adaptive to the conditional uncertainty given the prompt? - What realistic decision-making scenarios can this method be applied to? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for the feedback. Our response to the queries is as follows: **Importance and generality of the MCQ setting.** The multiple-choice question-answering framework encompasses any setting in which an LLM must select from among a finite number of options. We believe that this describes many if not most steps in `agentic workflows`. This includes selecting function calls or APIs, selecting among UI elements, selecting databases or tables in text-to-SQL settings, selecting which agent to pass an output to next, selecting the next step in a plan, selecting a document from a database for RAG, selecting a set of in-context examples from a database of such examples, selecting diagnosis codes or labels, etc. The answer options do not necessarily need to be defined in advance: in open-ended response settings, for example, it's possible to have an LLM generate a shortlist of initial candidates which can then be pruned before being passed to other agents. We agree that it is an interesting open question how to extend our framework to the open-ended response setting, but we believe that the MCQ abstraction is widely applicable on its own. **Comparison with prompting-based methods.** Our work's focus is to evaluate the hypothesis that conformal prediction based pruning can be effective in improving accuracy. Comparison with prompting-based methods would be interesting future work. **Other advantages of pruning answer choices.** In addition to improving accuracy, pruning answer choices can lead to computational and dollar savings. This can occur when a score function is used in the conformal procedure that is cheap to compute relative to the cost of the MCQ query. For more details, please see the section *Small conformal prediction sets reduce costs* in our response to reviewer `z4Sk`, and the section *Computational cost* in our response to reviewer `uJFE`.
**On conditional uncertainty given the prompt:** In general, conditional calibration is not possible in conformal prediction (Barber et al. 2020). Due to this limitation, conformal prediction with a marginal coverage guarantee is widely used, and it is suitable for our work, where we aim to evaluate the hypothesis that conformal prediction based pruning can be effective in improving accuracy. Further, the logit scores are generated by running the forward pass of the LLM on the entire prompt including all the answer options; these scores then serve as features, along with the other features described, for the CP-OPT procedure. In that sense, both the logit scores and our CP-OPT scores represent the uncertainty conditional on the prompt, and the resulting sets reflect this uncertainty. **Regarding realistic decision-making scenarios**, our method can be applied to any setting in which an LLM must choose from among a finite set of response options, whether those options represent question answers, APIs, function calls, downstream agents which can receive input, etc. We refer the reviewer to the section *Importance and generality of the MCQ setting* above. We hope our response resolves the queries. We are happy to answer any further questions you may have. --- Rebuttal Comment 1.1: Comment: Thanks for the responses. It could be interesting to apply this method in agentic workflows to see if it improves the ability of LLMs to pick the right actions and selections in context. I think that this work would benefit from fleshing out the real-world applications a little by including such agentic benchmarks. Currently, the experiments only show very marginal gains with no strong baselines and only in contrived MCQ settings, which makes it hard to judge the significance of the contribution.
--- Reply to Comment 1.1.1: Comment: For an application in an agentic workflow, we consider the Natural Language Question to SQL (NL2SQL) task, where an LLM-based agent generates a SQL query for a user's natural language question. A component of the standard agentic workflow in this task is to first predict the relevant tables whose schema should be included in the context of the LLM, which generates the SQL query. This step is critical to decrease cost and, in some cases, is necessary when the full database schema would exceed the LLM's context limit. We consider the BIRD dataset (https://arxiv.org/pdf/2305.03111) - a large benchmark that contains 12,751 NLQ-SQL pairs across 95 databases. We filter out databases with 20 tables or more (to avoid context limit errors) and remove the retail_world databases due to inconsistent table naming. We considered the following settings: **Approach 1** - Include all table schemas in the LLM prompt. **Approach 2** - Include all table schemas for tables whose cosine similarity score is greater than a particular threshold, up to a maximum of 10 tables. The cosine similarity is taken between the embeddings of the natural language question and the table name using the OpenAI text-embedding-ada-002 model. Coverage is defined to include all tables used in the annotated ground-truth SQL query. Coverage was approximately 90%, although this was not explicitly controlled. **Approach 3** - Include tables selected using conformal prediction (CP) on CP-OPT scores. This is equivalent to the CROQ procedure, where the scores for CP are obtained from a source other than the LLM. More specifically, we learn CP-OPT scores using embeddings of natural language questions and table names. We used 3412 NLQ-SQL pairs for training in approach 3, and validated on 3411 examples in approaches 2 and 3. We then tested the 3 approaches on 200 NLQ-SQL pairs.
We use GPT4-0613 as the LLM for SQL query generation, and report the execution accuracy, average set size, and total token cost. The results in all three settings are summarized in the table below, | | Accuracy | Avg. Set Size | Coverage | LLM Cost | | -------- | -------- | -------- | -------- | -------- | | Approach 1 | 32.0% | 7.270 | 100% | $7.10 | | Approach 2 | 29.5% | 6.405 | 88% | $6.63 | | Approach 3 (Ours) | **32.5%** | **2.685** | 92% | **$3.89** | Here, the set size means the number of tables whose schema will be included in the LLM context. Thus, lower avg. set size means fewer tables (and hence fewer tokens) in the LLM context. In the results, we see a significant reduction in the avg. set size in approach 3 while maintaining high coverage (92%). This results in a substantial reduction in the number of tokens in the LLM context, **leading to a 45% decrease in LLM cost** all while achieving slightly higher accuracy in comparison to approach 1.
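Approach 2's similarity filter is straightforward to sketch. The snippet below is illustrative only: embeddings are assumed precomputed (the response names OpenAI text-embedding-ada-002 as the embedding model), and the threshold value here is a placeholder, not the one used in the experiment.

```python
import numpy as np

def select_tables(question_emb, table_embs, table_names, threshold=0.8, max_tables=10):
    """Keep tables whose name embedding has cosine similarity with the
    question embedding above a threshold, capped at max_tables."""
    q = question_emb / np.linalg.norm(question_emb)
    T = table_embs / np.linalg.norm(table_embs, axis=1, keepdims=True)
    sims = T @ q                      # cosine similarity per table
    ranked = np.argsort(-sims)        # most similar first
    keep = [i for i in ranked if sims[i] > threshold][:max_tables]
    return [table_names[i] for i in keep]

# Toy 2-d embeddings: "orders" is aligned with the question, "logs" is not.
q = np.array([1.0, 0.0])
tables = np.array([[0.9, 0.1], [0.0, 1.0]])
print(select_tables(q, tables, ["orders", "logs"], threshold=0.5))  # → ['orders']
```

Approach 3 replaces the raw similarity threshold with a conformally calibrated threshold on learned CP-OPT scores, which is what yields the controlled coverage and smaller sets in the table above.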
Sorbet: A Neuromorphic Hardware-Compatible Transformer-Based Spiking Language Model
Accept (poster)
Summary: This paper argues that current Transformer-based SNN language models are difficult to deploy on neuromorphic chips due to the presence of softmax and layer normalization operations. To address this challenge, the authors propose Sorbet, a model that is more compatible with neuromorphic hardware. Sorbet is based on the concept of shifting and integrates novel PTsoftmax and BitShifting-based PowerNorm algorithms, which avoid complex operations like division and exponentiation. Additionally, the paper introduces techniques such as knowledge distillation and model quantization to enhance model performance and energy efficiency. Sorbet achieves comparable performance to other SNN language models on the GLUE benchmark. Claims And Evidence: Yes. The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes. The proposed Sorbet and evaluation criteria make sense for the problem. Theoretical Claims: Yes. The theoretical claims include 1. the bit-shifting-based step maintains PowerNorm's gradient boundedness, and 2. the approximation error of PTsoftmax remains within a constant factor of the traditional softmax. In my opinion, both of them are clear and there are no obvious errors. Moreover, I am concerned about the credibility of Assumption B.1. Line 651 states, "In practical scenarios, the activations in Transformer-based SNNs are typically non-trivial, ensuring that their L1 norm remains above 1." However, in Figure 1 (c) and (d), the input X of the BSPN module is not binary (0-1). I hope the authors can provide a more detailed explanation, whether through theoretical analysis or experimental evidence. Experimental Designs Or Analyses: I checked the soundness/validity of the experimental designs and analyses. The main results, energy saving analysis and ablation study are necessary due to the claim of energy efficiency and competitive performance.
However, I believe that the experiments and explanations should be further supplemented. Overall, the proposed method integrates module design, quantization, distillation, and spiking neurons, making the training process relatively complex. Therefore, the authors should clearly describe the experimental setup of these components and provide sufficient ablation studies; otherwise, readers may find it confusing. I would like to raise the following questions regarding the experiments and hope the authors can address them and revise the paper accordingly. This is important for me to reconsider the rating. 1. The experiments are based on the BERT model, whose training consists of two stages: pre-training and fine-tuning. However, the paper does not explicitly mention these two critical stages, even in Supplementary A's Algorithm 4. This makes it unclear whether Algorithm 4 entirely pertains to the pre-training phase. For example, when is the spiking neuron introduced—right after pre-training, or only after fine-tuning? Additionally, does the fine-tuning phase still involve knowledge distillation? 2. In Line 306 (left column), which model is the teacher? Is it BERT_base? 3. In Algorithm 4, why is knowledge distillation conducted in three steps? Is there any literature supporting this approach? What are the hyperparameters (e.g., learning rate, epochs) for each step of distillation? Unfortunately, I could not find any details on experimental hyperparameters in the paper. 4. Since Sorbet is ultimately an SNN model with activations taking only 0 or 1, why is 4-bit activation quantization performed first? Would it be possible to skip activation quantization and directly convert the model to an SNN? If activation quantization is necessary, please provide ablation studies to support its significance. 5. In Table 2, two versions of Sorbet are listed. What are the differences between them in terms of weight and activation quantization?
In Line 344, does "a power of two" refer to the 1-bit weight quantization mentioned in Line 260 (right column)? If so, what is the relationship between the Sorbet models in Lines 340 and 341? 6. In Table 4, does "Bits" refer to weight quantization or activation quantization? In Line 423 (left column), the authors state that "the accuracy drop from full-precision BERT to Sorbet is mainly caused by the quantization of weight and spike generation process, not by the replacement of softmax and normalization." This may suggest that Table 4 refers to weight quantization. However, given that the model also uses 4-bit activation quantization, what is its impact on performance? Additionally, how does the spike generation process affect the model’s accuracy? I could not find any ablation study on introducing spiking neurons. 7. In Line 134 (left column), the authors adopt a novel ASG model instead of the traditional IF model, arguing that "ASG accumulates input spikes and calculates membrane potential in a single pass, requiring only one access to the weights." However, based on Algorithm 3, it seems that the IF model could also load weights in a single pass by computing the summed inputs for t from 1 to T, similar to Step 3, but without averaging as done in Step 4. The authors' reasoning here might be misleading. Furthermore, I would like to see ablation studies on how different spiking neuron models affect performance. Supplementary Material: Yes. I reviewed Supplementary material A,B,C,D. Relation To Broader Scientific Literature: A key computational characteristic of SNNs is sparse operations, primarily based on addition, which has been demonstrated in models like SpikFormer and Spike-driven Transformer. Recently, there has been a surge of work on Transformer-based SNN language models, focusing on the critical challenge of handling the Attention mechanism. 
Some approaches, like SpikeBERT, attempt to discard the Softmax operation similarly to SpikFormer; however, doing so in language tasks often leads to poor performance unless distillation techniques are applied. On the other hand, SpikeLM retains the Softmax operation. This work explores an alternative approach—constructing an Attention mechanism using a shift-based method while preserving a Softmax-like structure, making it a valuable contribution to the field. Essential References Not Discussed: No. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: 1. In the Spiking Neural Networks section of the "Preliminary" chapter, the author mentions terms such as "surrogate gradients" and "ANN-to-SNN" without providing explanations or references, which may lead to confusion. Furthermore, which specific method does the SNN in this paper employ? 2. In Line 810, "The energy cost are from ()": the citation in () appears to be missing. 3. When I checked the code in Supplementary material E, I found the repository had expired. Questions For Authors: 1. What is the relationship between k and \psi_B in Algorithm 1? 2. What is the proportion of energy consumption reduction contributed by efficient methods including BSPN, PTSoftmax, quantization, and spiking neurons? Which one plays the primary role? 3. The seven experimental questions raised under Experimental Designs Or Analyses, which are important. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thorough review and valuable suggestions. We hope our response below can address your concerns.

---

> Q1 & 2: Pretraining, fine-tuning, and distillation

We use a pre-trained and fine-tuned BERT from HuggingFace as a starting point. We then applied distillation and quantization techniques, as in Algorithm 4, without further fine-tuning. After these, the weights are directly converted into a corresponding SNN. This part is not the main contribution of Sorbet. **In our Sorbet experiments, the teacher model refers to BERT_base**. We will clarify this in the updated version.

---

> Q3: Multi-step distillation (MSD)

The MSD method we use is based on BiT [1]. Section 5.5 of that paper showed that MSD yields better performance. MSD is also adopted in other recent works [2]. We performed distillation in three steps because we applied three major modifications to the model before SNN conversion.

[1] Liu et al. Bit: Robustly binarized multi-distilled transformer. NeurIPS, 2022.
[2] Han et al. Amd: Automatic multi-step distillation of large-scale vision models. ECCV, 2024.

We will add the hyperparameters in the appendix. They are included at the same link as our code.

---

> Q4: Skipping activation quantization

Activation quantization is essential for SNNs from ANN-to-SNN conversion to reduce timesteps and maintain accuracy. A quantized ANN can resemble an SNN, allowing the model to learn how to represent quantized values in the equivalent SNN. Recent studies have also proposed lossless conversion methods through activation quantization [3].

[3] Shen et al. Are conventional SNNs really efficient? A perspective from network quantization. CVPR 2024.

As an ablation study, we converted a full-precision BERT into an SNN with T=16. The model achieved only 50.92 accuracy on the SST-2 dataset, indicating a failure to represent the data.
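To illustrate why a quantized ANN can be mapped to an SNN without loss (the equivalence argued in Q4 above), here is a minimal sketch. This is our own illustration, not the authors' implementation: it assumes rate coding, activations clipped to [0, 1], and the QCFS-style convention of quantizing onto the grid {0, 1/T, ..., 1}.

```python
import numpy as np

T = 16  # timesteps, matching 4-bit activations (2**4 = 16)

def quantize(x, T=T):
    # Clip-and-round an activation onto the grid {0, 1/T, ..., 1}:
    # exactly the set of values a rate-coded spike train of length T can express.
    return np.clip(np.round(x * T), 0, T) / T

def to_spike_train(q, T=T):
    # Emit k = q*T spikes over T binary timesteps.
    k = int(round(q * T))
    return np.array([1.0] * k + [0.0] * (T - k))

x = 0.37
q = quantize(x)            # lands on the grid as 6/16 = 0.375
spikes = to_spike_train(q)
# The time-averaged spike train reproduces the quantized activation exactly,
# so a model trained with quantized activations converts without this source of loss.
assert np.isclose(spikes.mean(), q)
```

A full-precision activation such as 0.37 is not on this grid, which is consistent with the ablation above where directly converting an unquantized BERT collapses to 50.92 accuracy.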
---

> Q5: Two versions of Sorbet in Table 2

The two versions of Sorbet only differ in their **quantization strategies for BSPN**. For the Sorbet$\ddagger$ (line 340), we quantize BSPN's weight to powers of two. These weights refer to the scaling factor $\frac{\gamma}{\psi}$ in line 15 of Algorithm 1. We will clarify this in our revised manuscript.

---

> Q6: 'Bits' in Table 4

In Table 4, 'Bits' refers to the quantization bit-width of activations. For 'Bits=4', weights are 1-bit and activations are 4-bit. For 'Bits=1', both weights and activations are 1-bit. As quantizing the activations is necessary to realize the spiking neurons, the energy-saving impact of quantization and spiking cannot be evaluated separately. However, the loss due to spiking neurons after quantization can be measured as follows:

| Activation Bits/Timestep | 1/2 | 2/4 | 3/8 | 4/16 |
| --- | ---- | ---- | ---- | ---- |
| Acc (Quantized ANN) | 79.8 | 87.9 | 90.1 | 90.9 |
| Acc (SNN) | 78.7 | 87.4 | 89.3 | 90.4 |
| Loss | 1.1 | 0.5 | 0.8 | 0.5 |

---

> Q7: Why ASG & ablation for spiking neurons

The traditional IF model loads weights, computes membrane potentials, and generates spikes at every timestep, thus requiring multiple reads of the weights. Algorithmically, the IF model can be optimized by first summing inputs across all $T$ timesteps to reduce repeated weight accesses. However, this optimization would require extra storage for membrane potentials at each timestep. ASG avoids storing intermediate data. Furthermore, we conducted an ablation study on the SST-2 dataset. For T=2, ASG achieves 78.7% accuracy, while IF achieves 57.1%. For T=4, ASG reaches 87.4%, while IF is at 79.7%. These results demonstrate that ASG outperforms the IF model, further justifying our choice of ASG.

---

> Q8: Other suggestions

We will add the explanation in the preliminary section: Surrogate gradients approximate the gradients during backpropagation, enabling SNN training despite their discrete nature.
These gradients smooth out non-differentiable spike events. ANN-to-SNN conversion involves transforming a trained ANN model into an SNN by mimicking ANN neuron behavior with the spike. **In Sorbet, we use ANN-to-SNN conversion.** We will fix the missing reference in the appendix. The extended repository and code are now available at the same link as in the manuscript. --- > Q9: k and \psi_B in Algorithm 1 Thank you for pointing out. There is a mistake in line 10, Algorithm 1: $$\sigma_B^2 \gets \frac{1}{B}\sum_{i=1}^{B}\mathbf{X}_i^2$$ should be $$\psi_B^2 \gets \frac{1}{B}\sum_{i=1}^{B}\mathbf{X}_i^2$$ We hope that clarifies. --- > Q10: Energy saving proportion SNNs are energy efficient because of their sparsity and low-bitwidth operations. Quantization and spiking neurons contribute to more than 99% of the energy-saving. Even though BSPN and PTSoftmax by themselves do not consume much energy, without them, it is not possible to realize true SNNs and the benefits that SNNs bring. --- > Q11: Explanation of Assumption B.1 The plot for the distribution of the input of normalization can be found in the code repository. --- Rebuttal Comment 1.1: Comment: Thank you very much for your patient reply. Although some of the responses lack detail, I understand the challenge of addressing so many questions within the character limit. Overall, the work presented in this paper is solid and could help stimulate more discussion in the field of SNNs. I have decided to raise my score. Since the answers to these questions are somewhat brief and scattered, I sincerely hope the authors can provide a clearer description of the experimental procedures, necessary explanations and references in future revisions to enhance readability. --- Reply to Comment 1.1.1: Comment: Thank you very much for your feedback on our work and for taking the time to provide such constructive comments. We would like to make additional clarifications on several key issues. 
Due to time and space constraints here, we will incorporate your suggestions and provide a more detailed and systematic description of the experimental procedures in our revised manuscript.

---

> Q1 & 2 Extension: Pretraining, fine-tuning, and distillation

As we mentioned in our previous response, our starting point is already fine-tuned with the target dataset. We have added the download links in the code repository. To boost the energy efficiency of our model and enable the encoding of all activations into spike trains, we quantize all weights to 1-bit and activations to 4-bits. This step adopts the model distillation method detailed in [1]. With the incorporation of BSPN and PTsoftmax, the revised model is treated as a student model. After distillation, the weights will be fixed and transferred to an SNN directly.

[1] Liu Z, Oguz B, Pappu A, et al. Bit: Robustly binarized multi-distilled transformer. NeurIPS, 2022.

---

> Q3 Extension: Training hyperparameters

We will add the training details in our appendix to provide a clear path for reproducing our work. Specifically, the parameters for each step vary from task to task as follows:

| **Dataset** | **Epochs** | **Max Seq Length** | **Batch Size** | **Learning Rate** |
|-------------|----------------------|--------------------|----------------|-------------------|
| MNLI | 100 | 128 | 120 | 1e-5 |
| MRPC | 100 | 128 | 40 | 1e-6 |
| SST-2 | 200 | 64 | 180 | 1e-6 |
| STS-B | 200 | 128 | 30 | 5e-7 |
| QQP | 150 | 128 | 100 | 1e-5 |
| QNLI | 150 | 128 | 80 | 1e-6 |
| RTE | 100 | 128 | 10 | 5e-6 |

---

> Q4 Extension: Activation Quantization

A quantized ANN can be deemed equivalent to an SNN because by quantizing activations in an ANN, continuous outputs are transformed into discrete signals that closely resemble the threshold-based spiking mechanism inherent in SNNs. In SNNs, neurons fire only when their membrane potential exceeds a specific threshold, producing binary outputs.
Similarly, quantization in ANNs acts as a filter, suppressing sub-threshold activations and preserving only those above a defined cutoff, thus effectively mimicking the discrete, event-driven behavior of spiking neurons. We can also offer other references to show that performing activation quantization before conversion is mainstream, namely [2] and [3]. [2] Bu T, Fang W, Ding J, et al. Optimal ANN-SNN Conversion for High-accuracy and Ultra-low-latency Spiking Neural Networks. ICLR 2023. [3] Hu Y, Zheng Q, Jiang X, et al. Fast-SNN: Fast spiking neural network by converting quantized ANN. IEEE TPAMI, 2023. --- > Q8 Extension: Preliminary for SNN As mentioned in the paper, we either directly train an SNN or convert an ANN into an SNN. We adopted the latter in this paper because it has been shown to converge faster as well as produce better accuracies when compared to the former [4, 5]. Direct training of SNNs requires surrogate gradients that not only introduce additional design and tuning challenges but also have issues related to gradient approximation and delay, making the training process more complex and prone to getting stuck in local optima. Secondly, by converting a well-trained ANN to an SNN, we can theoretically maintain performance levels comparable to those of the original ANN, whereas training SNNs directly with surrogate gradients may result in more significant performance fluctuations due to approximation errors and instability. [4] Jiang H, Anumasa S, De Masi G, et al. A unified optimization framework of ANN-SNN conversion: towards optimal mapping from activation values to firing rates. ICML, 2023. [5] Huang Z, Shi X, Hao Z, et al. Towards High-performance Spiking Transformers from ANN to SNN Conversion. ACM MM, 2024.
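To make the single-pass argument of Q7 above concrete, the following sketch contrasts the two memory-access patterns. This is our own illustrative code, not the paper's Algorithm 3: function names are ours, and threshold/reset logic is deliberately omitted, since the point of comparison is only the accumulation of membrane potential and the number of weight accesses.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d_in, d_out = 16, 8, 4
W = rng.normal(size=(d_out, d_in))                     # synaptic weights
x = rng.integers(0, 2, size=(T, d_in)).astype(float)   # binary input spikes

def if_style(W, x):
    # IF-style accumulation: the weight matrix is read at every timestep.
    V = np.zeros(W.shape[0])
    for t in range(len(x)):
        V += W @ x[t]              # T weight accesses in total
    return V

def asg_style(W, x):
    # ASG-style accumulation: sum the input spikes first,
    # then perform a single weight access.
    return W @ x.sum(axis=0)       # 1 weight access in total

# By linearity, both yield the same accumulated membrane potential.
assert np.allclose(if_style(W, x), asg_style(W, x))
```

The optimized-IF variant mentioned in Q7 would interleave the timestep loop with spike generation, which is what forces either repeated weight reads or extra storage for per-timestep potentials; the accumulate-then-generate order shown here is what avoids both.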
Summary: The authors propose Sorbet, a transformer-based spiking language model optimized for neuromorphic hardware, enhancing energy efficiency while maintaining strong performance. It introduces BitShifting-based PowerNorm (BSPN) for normalization and Power-of-Two softmax (PTsoftmax) as a hardware-friendly alternative to softmax. Through binary weight quantization via knowledge distillation, Sorbet achieves 27.16x energy savings over BERT and 3.16x compared to SpikeLM while remaining competitive on the GLUE benchmark. Claims And Evidence: Yes, the claims appear well-supported by the experimental results and visualizations. Methods And Evaluation Criteria: Yes Theoretical Claims: Yes, the theorems have proofs in the appendix section. I read them, and they seem sufficiently justified. Experimental Designs Or Analyses: Yes, the experiments look sound. For example, the paper evaluates Sorbet against multiple SOTA methods and includes ablation studies to highlight the effectiveness of the PTsoftmax and BSPN modules. Supplementary Material: The supplementary material includes a repository link for the code, but when I accessed it, the repository was no longer available, indicating it has expired. Relation To Broader Scientific Literature: The paper contributes within the domains of small language models for edge devices, SNNs for energy efficiency, and transformer-based SNNs for NLP tasks. It also builds on research in quantized BERT models and simplified transformer architectures. The authors acknowledge prior work in these areas and present Sorbet as a solution to key limitations in existing approaches, particularly for NLP tasks on neuromorphic hardware. Essential References Not Discussed: I believe the authors have covered a range of relevant references related to SNNs, transformer-based models, and quantization techniques. Other Strengths And Weaknesses: Strengths:- 1.) The paper is well-written and articulated. 2.)
The authors propose the first transformer-based spiking language model that removes softmax and Layer Normalization. 3.) PTsoftmax and BSPN are proposed to replace softmax and layer normalization, using bit-shifting instead of costly operations, making Sorbet more efficient for neuromorphic hardware. 4.) They propose Sorbet, a binary spiking language model derived from BERT, integrating full quantization and design refinements to enable low-power, high-efficiency inference comparable to ANN models. 5.) A broad range of experiments (multiple datasets, models, baselines, and ablations) are conducted, and their experimental results demonstrate the effectiveness of Sorbet over existing methods. It achieves 27.16x energy savings over BERT and 3.16x over SpikeLM while maintaining stable performance on the GLUE benchmark. Weaknesses:- 1.) The authors provide a theoretical energy estimation of SNN architectures rather than empirical validation. The efficacy of their approach (PTsoftmax and BSPN) would have been more convincingly demonstrated if the authors had deployed the converted SNN on neuromorphic hardware, such as Intel Loihi, TrueNorth, or BrainChip Akida, and provided measured power consumption data. 2.) While SNNs offer low power consumption, they inherently introduce additional latency due to spike processing over T timesteps. Although the proposed Sorbet is quantized and made more efficient for neuromorphic accelerators, it still results in higher latency compared to ANNs. For real-world applications where inference time is critical, it remains unclear how SNNs can effectively address this challenge. A quantitative latency analysis, particularly in time-sensitive scenarios, would have been valuable. Additionally, evaluating the inference time could have provided deeper insights into Sorbet's practical impact. 3.) Hardware Dependency: The practical deployment of SNN models relies on neuromorphic hardware, which seems limited in accessibility and widespread adoption.
The authors should have addressed this limitation in their study. 4.) A discussion of the potential limitations of the Sorbet approach would enhance the paper's credibility. Other Comments Or Suggestions: Correct the orientation of captions of Figs. 2 and 3. Questions For Authors: I would like all points under weaknesses to be addressed. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough review and valuable suggestions. Below are our responses, which we hope will address your concerns.

---

> Q1: Deployment of SNNs on neuromorphic hardware

We have evaluated the hardware compatibility with the Lava framework to simulate the Loihi chip. However, the platform does not provide the energy cost, so we independently implemented our designs of the two functions in Verilog, tested power consumption using a commercial 22nm FD-SOI technology process, and reported the results in the manuscript. The results indicate that our proposed functions PTsoftmax and BSPN achieve approximately 27.63x and 12.4x better energy efficiency compared to conventional implementations.

---

> Q2: Latency and inference time

A direct comparison between ANNs and SNNs on the issue of latency is not straightforward. It will depend on model and hardware parameters, as well as circuit implementation and optimization. Also, below certain thresholds required by the application, it becomes a non-issue. All things being equal, the latency of SNNs increases with the timestep. We experimented with setting the Sorbet timestep to 2 and achieved an accuracy of 78.7 on the SST-2 dataset, demonstrating the robustness of Sorbet. Even at the same model size, the hardware circuits of an SNN would be simpler than those of an ANN. This would translate to a higher frequency. While it may take a few cycles for an SNN to produce what an ANN can do in a single cycle, because of the higher frequency, the SNN may still perform better on the end-to-end latency, or at least be able to satisfy the application's requirements.

---

> Q3: Hardware Dependency

While SNNs are still not widely used, many companies, including Intel, IBM, BrainChip, and Qualcomm, are actively exploring neuromorphic hardware, reflecting the strong interest and potential in this emerging field.
We believe that once SNN models can achieve comparable performance of advanced ANNs and be efficiently deployed on neuromorphic hardware using techniques such as what is proposed here, these chips will be more widely adopted as viable alternatives to overcome the power and latency limitations of traditional digital computing. --- > Q4: Limitation We will add a discussion section on the limitations of Sorbet. Sorbet is designed for SNN deployment specifically on edge devices. Sorbet optimizes the model with constrained computational resources. Hence, it probably will not outperform larger, more complex models in environments without resource constraints. --- > Q5: Figure Caption Thanks for pointing this out, we will fix it in our paper. We have extended the repository, and the code is available at the same link provided in the manuscript.
Summary: This paper proposed a Spiking Transformer language model, named Sorbet, designed for neuromorphic hardware. Sorbet introduces two approximations, PTsoftmax and BSPN, to replace traditional softmax and layer-wise normalisation. They aim to make the model neuromorphic-compatible and energy-efficient. PTsoftmax replaces softmax's exponential and division operations with bit-shifting. BSPN approximates the L1 norm to the nearest power of two, avoiding square and square root operations. Sorbet also integrates binary weight quantization with knowledge distillation to further reduce the model's computational cost. On the GLUE benchmark, Sorbet can achieve competitive performance, yet with much reduced energy consumption. Claims And Evidence: One of the key claims of this study is a neuromorphic hardware compatible model. Hence the evidence provided should have a hardware focus, which is not the case in the paper. Can the model be evaluated on actual chips, such as Loihi, IBM TrueNorth, or NeuroGrid? It is more important than showing high efficiency. The choice of baseline methods needs updating to substantiate the claim of Sorbet's high performance and high efficiency. There are some efficient models like DistilBERT [1] or TinyBERT [2] which are designed for edge devices. How would Sorbet compare in terms of energy efficiency and performance trade-offs against these efficient models? In addition, recent spiking transformers should be considered in the comparison, such as SpikeBERT [3]. ANN-to-SNN conversion methods such as QCFS [4] should also be considered. SpikeGPT is mentioned in the paper, but not included in the comparison. [1] Sanh et al., “DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter”, NeurIPS 2019. [2] Jiao et al., “TinyBERT: Distilling BERT for Natural Language Understanding”, EMNLP 2020. [3] Lv et al., Spikebert: A language spikformer learned from bert with knowledge distillation, AAAI 2024 [4] Bu, et al.
Optimal ANN-SNN conversion for high-accuracy and ultra-low-latency spiking neural networks, ICLR 2022. In Table 2, the performance of SpikingBERT is listed as: 83.8 75.4 86.7 80.5 - 75.8 - on the seven GLUE datasets. However, the original paper reported better results: 86.82 78.10 88.19 85.20 66.06 79.17/85.15 82.20/81.90 respectively. Why is there such a discrepancy? The results from SpikeLM are also different from those in Xing's paper. Another point is the time step. It is not mentioned in the main results in Table 2. Does Sorbet fix it to 16? If so, what would happen with a different timestep? Methods And Evaluation Criteria: The proposed approximation methods, PTsoftmax and BSPN, are interesting and promising. In this sense, this study may lead to a practical way to replace energy-intensive operations in Transformers, which could be a critical step for enabling language models on neuromorphic hardware. The evaluation is performed on the GLUE benchmark, comparing Sorbet's performance with several baselines, including quantised models and other SNN-based language models. This study is solid in this regard. Theoretical Claims: It is nice to see theoretical proofs in the paper. The analysis confirms that BSPN maintains bounded gradients, making it a robust and efficient alternative to LN. Also, although PTsoftmax does not strictly sum to 1, the analysis shows that this discrepancy has a minor impact on performance. In addition, the ablation studies confirm that the use of PTsoftmax and BSPN introduces minimal performance degradation. Experimental Designs Or Analyses: The paper’s experimental design and analyses appear to be sound and comprehensive. The comparison is performed using the well-established GLUE benchmark. Several baselines, e.g., BERT, SpikeLM, and other quantized models, are involved. The energy-saving analysis and theoretical proof are included in the paper. In addition, ablation studies are performed to show the contributions of PTsoftmax and BSPN.
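For intuition about how bit-shifting can stand in for both the exponential and the division, here is a generic shift-only softmax sketch in the spirit of what the review describes. This is an illustration only: `pt_softmax_sketch` is a hypothetical function built from the mechanism summarized above (a base-2 exponent plus a power-of-two denominator), not the paper's actual PTsoftmax algorithm.

```python
import numpy as np

def pt_softmax_sketch(logits):
    # 1) Base-2 exponent: 2**(-k) for a non-negative integer k is a right
    #    shift, so round the max-subtracted logits to integers.
    k = np.round(np.max(logits) - np.asarray(logits, dtype=float)).astype(int)
    num = 2.0 ** (-k)          # realizable in hardware as 1 >> k
    # 2) Replace division by the sum with a shift by its nearest power of two.
    m = int(np.round(np.log2(num.sum())))
    return num / 2.0 ** m      # realizable as num >> m

p = pt_softmax_sketch([3.2, 1.1, 0.4])
# The ordering of the true softmax is preserved, but the outputs only
# approximately sum to 1: rounding the sum to the nearest power of two
# keeps the total within a factor of sqrt(2) of 1.
assert p[0] > p[1] > p[2]
assert 0.5 < p.sum() < 2.0
```

This also makes the "does not strictly sum to 1" point concrete: the only error sources are the two rounding steps, each bounded, which is consistent with the bounded-approximation-error claim in the paper.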
Supplementary Material: The source code provided is not accessible: https://anonymous.4open.science/r/Sorbet Relation To Broader Scientific Literature: This work could contribute towards high-performance transformers designed for neuromorphic hardware, offering theory and practice of efficient neural network design. Essential References Not Discussed: See the claims section. More references should be added. Other Strengths And Weaknesses: See above Other Comments Or Suggestions: SpikeLM by Xing et al., "SpikeLM: Towards general spike-driven language modeling via elastic bi-spiking mechanisms" was published at ICML 2024, not just an arXiv article. Figure 3 is not necessary; a simple table would be clearer and more concise. Two LaTeX problems in the Table 1 caption: '+' --> `+' Line 370 Where -> where Questions For Authors: How to use Sorbet as a multimodal LLM? That is an important extension to be considered. "27.16× energy savings compared to BERT" was mentioned multiple times. How was it calculated? Also see questions in the above sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough review and valuable suggestions. Below are our responses, which we hope will address your concerns.

---

> Q1: Hardware focus and evaluation on actual chips

We appreciate your suggestions. To demonstrate the neuromorphic hardware compatibility of our proposed model, we have implemented and validated the PTsoftmax and BSPN layers using the Lava framework, targeting Intel's Loihi architecture. We created an AbstractProcess along with its corresponding PyLoihiProcessModel, deployed within Lava's simulation environment Loihi1SimCfg. We do not have access to physical neuromorphic chips. However, beyond the simulation, we implemented our design in Verilog and evaluated its power consumption using a commercial 22nm FD-SOI technology process. The results show that PTsoftmax and BSPN achieve approximately $27.63\times$ and $12.4\times$ better energy efficiency than conventional operations. We will include the results in our final version.

---

> Q2: Compare with more baselines

Thank you for the suggestion; we have added the following comparison and will include it in our result section. In terms of FLOPs, TinyBERT\_6 and DistilBERT\_4 reduce energy by 2.0× and 3.0×, respectively, compared to BERT, while Sorbet achieves a remarkable 27.16× reduction:

| Model | Size(MB) | QQP | MNLI-m | SST-2 | QNLI | RTE | MRPC | STS-B |
| ---------- | -------- | ---- | ------ | ----- | ---- | ---- | ---- | ----- |
| BERT_base | 418 | 91.3 | 84.7 | 93.3 | 91.7 | 72.6 | 88.2 | 89.4 |
| DistilBERT | 207 | 88.5 | 82.2 | 91.3 | 89.2 | 59.9 | 87.5 | 86.9 |
| TinyBERT_6 | 207 | - | 84.6 | 93.1 | 90.4 | 70.0 | 87.3 | 83.7 |
| Sorbet | 13.4 | 83.4 | 75.8 | 89.6 | 84.6 | 59.2 | 78.4 | 73.6 |

We did not include SpikeBERT and SpikeGPT because they used different datasets except for SST-2.
A comparison of these two models using SST-2 would be:

| Model | Size(MB) | Energy(mJ) | Acc |
| ---------- | -------- | ---------- | ---- |
| SpikeGPT | 216 | - | 88.8 |
| SpikeBERT | - | 28.54 | 85.4 |
| **Sorbet** | 13.4 | 0.56 | 89.6 |

Regarding ANN-to-SNN conversion methods, we performed the conversion after quantizing activations to 4 bits. This approach aligns with the mainstream ANN-to-SNN conversion techniques, such as QCFS [1] and [2]. The core idea behind [1] and [2] is to clip and quantize the activations to make the ANN model behave more like an SNN, thereby minimizing conversion loss. Our work follows this approach, incorporating advanced multi-step distillation inspired by [3] to obtain a quantized model with improved performance.

[1] Bu T, Fang W, Ding J, et al. Optimal ANN-SNN Conversion for High-accuracy and Ultra-low-latency Spiking Neural Networks. ICLR, 2023.
[2] Shen G, Zhao D, Li T, et al. Are conventional SNNs really efficient? A perspective from network quantization. CVPR, 2024.
[3] Liu Z, Oguz B, Pappu A, et al. Bit: Robustly binarized multi-distilled transformer. NeurIPS, 2022.

---

> Q3: Results in Table 2

In Table 2, we reported 2 results for SpikingBERT and SpikeLM. The results from their original papers are in lines 336 and 337. We noticed they further quantized their models to 1-bit (SpikeLM reported this in their original paper, while SpikingBERT reported it separately in [4]). To make a fair comparison, we included quantized results in lines 338 and 339, denoted as 1-bit models.

[4] Bal M, Jiang Y, Sengupta A. Exploring Extreme Quantization in Spiking Language Models. ICONS, 2024.

---

> Q4: Ablation for different timesteps

The timestep used for all results reported is 16. We will make this explicit in the paper.
We performed an ablation study for timesteps on the SST-2 dataset:

| Timestep | 2 | 4 | 8 | 16 |
| ------ | ---- | ---- | ---- | ---- |
| Accuracy | 78.7 | 87.4 | 89.3 | 90.4 |

---

> Q5: Small typos and code accessibility

Thank you for pointing these out. We will fix them and adopt your suggestion to replace Figure 3 with a table. The repository had expired, but we have extended it. The code is available now at the same link provided in the manuscript.

---

> Q6: Use Sorbet as a multimodal LLM

The components we have designed can be used on any model that uses the transformer mechanism. Therefore, it should be easily extendable to multimodal LLMs with customized training processes.

---

> Q7: Calculation of energy saving

We calculate our energy savings as:

$$ N_{\text{saving}} = \frac{E_{\text{BERT}}}{E_{\text{Sorbet}}} = \frac{15.21}{0.56} = 27.16 $$

We use 15.21mJ for FP16 BERT_base from SpikeLM (Xing et al., 2024). For $E_\text{Sorbet}$, as in Appendix D.1:

$$ E_{\text{Sorbet}} = T \cdot r \cdot E_{\text{BERT}} \cdot \frac{E_{AC}}{E_{MAC}} $$

where $r$ is $0.13$ and $T$ is $16$. When using ${E_{MAC}} = 4.6pJ$, $E_{\text{BERT}}$ corresponds to FP32 BERT at 51.41mJ. Our operations are essential for SNNs but contribute less to energy savings, so we excluded them from the full model evaluation.

---

Rebuttal Comment 1.1: Comment: Appreciate the additional experiments and explanation. I have raised the score.

---

Reply to Comment 1.1.1: Comment: Thank you for reading our rebuttal and for updating the review!
De-mark: Watermark Removal in Large Language Models
Accept (poster)
Summary: DE-MARK presents a framework for removing n-gram-based watermarks, specifically targeting the soft watermarking scheme proposed by Kirchenbauer et al. (2023a). The method utilizes a novel querying strategy called "random selection probing" to estimate watermark parameters like strength and red-green lists. The paper claims theoretical guarantees for distribution preservation and demonstrates the efficacy of DE-MARK on models like Llama3 and ChatGPT in watermark removal and exploitation tasks. The core contribution is a practical approach to reverse-engineer and remove a specific type of statistical watermark without prior knowledge of watermark parameters. Claims And Evidence: The contribution of this work appears limited. Firstly, the attack is primarily focused on a specific soft watermark (Kirchenbauer et al., 2023a). This type of watermark, as demonstrated by Gu et al. (2024) in "On the Learnability of Watermarks for Language Models" (ICLR 2024), is known to be relatively easily learned. While the paper introduces refined techniques to estimate the hyperparameters of this soft watermark, the overall contribution in addressing a fundamentally weak watermark is arguably incremental. Secondly, the paper's scope does not extend to other watermarking methods, such as advanced cryptographic watermarks, which are designed to be theoretically resistant to parameter learning. Examples like "Undetectable watermarks for language models" highlight this gap. Furthermore, the second contribution regarding industry-scale applicability is questionable. It is unknown whether ChatGPT employs watermarking, and if so, which specific technique. Simulating a soft watermark on top-20 tokens and demonstrating its removal with DE-MARK does not convincingly demonstrate effectiveness against a real-world, industry-level watermarking implementation. 
Therefore, it is recommended that the authors revise the title to accurately reflect the limited scope of their method, specifically its applicability to soft n-gram watermarks. The current title is overly broad and implies a capability to remove a wider range of watermarks than is actually demonstrated. Methods And Evaluation Criteria: As stated in "Claims And Evidence" Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: Related Essential References Not Discussed: Gu et al. (2024) "On the Learnability of Watermarks for Language Models" (ICLR 2024) Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback, which has been invaluable in refining our manuscript. Below, we provide detailed responses to each of your comments: > Q1: Firstly, the attack is primarily focused on a specific soft watermark (Kirchenbauer et al., 2023a). This type of watermark, as demonstrated by Gu et al. (2024) in "On the Learnability of Watermarks for Language Models" (ICLR 2024), is known to be relatively easily learned. While the paper introduces refined techniques to estimate the hyperparameters of this soft watermark, the overall contribution in addressing a fundamentally weak watermark is arguably incremental. Secondly, the paper's scope does not extend to other watermarking methods, such as advanced cryptographic watermarks, which are designed to be theoretically resistant to parameter learning. Examples like "Undetectable watermarks for language models" highlight this gap. A1: Our method is specifically designed for n-gram-based approaches, which are widely employed in watermarking[1,2,3,4]. Consequently, it can be naturally extended to other n-gram watermarking methods. To support our claim, we extend our method on two additional n-gram-based advanced distortion-free watermarking: $\gamma$-reweight[3] and DiPmark[4], the results are presented in Table 13 and 14 in this [anonymous link](https://docs.google.com/document/d/e/2PACX-1vSfxtMpq2yL7QjOjW0NNWgI_J4LG9QHes7eBtj4P7LqdrIVBTuibloz0p0LLG5dhijwS7UhFcVfw537/pub), which demonstrate Demark’s strong generalization capabilities beyond just the KGW framework. We are not able to evaluate our attack method on Undetectable watermarks because: a) it is not n-gram-based, b) the vocabulary size of the LM in their paper is only 2, and it is not trivial to extend their method to models with larger vocabulary size c) it’s a pure theoretical paper without any code implementation. > Q2: Furthermore, the second contribution regarding industry-scale applicability is questionable. 
It is unknown whether ChatGPT employs watermarking, and if so, which specific technique. Simulating a soft watermark on top-20 tokens and demonstrating its removal with DE-MARK does not convincingly demonstrate effectiveness against a real-world, industry-level watermarking implementation. A2: Whether ChatGPT employs a watermark by default is irrelevant to our study. In our experiments, we manually embedded a soft watermark into ChatGPT outputs and successfully removed it using Demark. Moreover, as noted in various prior works [1,2,3,4], the choice of LLM has minimal impact on the effectiveness of watermarking techniques. Our extensive results across LLaMA 3B, LLaMA 8B, and Mistral 7B further confirm that Demark is robust to the choice of underlying LLM. Finally, both Reviewer 5v1A and eS2n acknowledged that our case study on ChatGPT is informative and valuable. > Q3: Therefore, it is recommended that the authors revise the title to accurately reflect the limited scope of their method, specifically its applicability to soft n-gram watermarks. The current title is overly broad and implies a capability to remove a wider range of watermarks than is actually demonstrated. A3: Our method is designed for n-gram-based approaches and is applicable beyond just soft n-gram watermarks, as demonstrated by our experiments in Tables 13 and 14, available in this [anonymous link](https://docs.google.com/document/d/e/2PACX-1vSfxtMpq2yL7QjOjW0NNWgI_J4LG9QHes7eBtj4P7LqdrIVBTuibloz0p0LLG5dhijwS7UhFcVfw537/pub). In response to the reviewer's recommendation, we will revise the title to emphasize our focus on n-gram watermarking. We would be more than happy to further discuss any additional questions or concerns you may have. 
[1] A watermark for large language models, Kirchenbauer et al., ICML 2023
[2] On the reliability of watermarks for large language models, Kirchenbauer et al., ICLR 2024
[3] Unbiased Watermark for Large Language Models, Hu et al., ICLR 2024
[4] DiPmark: A Stealthy, Efficient and Resilient Watermark for Large Language Models, Wu et al., ICML 2024
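For readers unfamiliar with the n-gram schemes discussed in this thread, the soft (KGW-style) watermark being targeted can be sketched in a few lines. This is a hedged illustration, not the paper's or KGW's implementation: the vocabulary size, hash (Python's built-in `hash` on the preceding n-gram), and parameters `gamma`/`delta` are all hypothetical placeholders.

```python
import random

def green_list(prev_tokens, vocab_size, gamma=0.5, n=1):
    """Seed a PRNG on the preceding n-gram and select a pseudorandom
    'green' subset of the vocabulary (sketch of the KGW idea)."""
    rng = random.Random(hash(tuple(prev_tokens[-n:])))
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(gamma * vocab_size)])

def watermark_logits(logits, prev_tokens, delta=2.0):
    """Add delta to the logits of green tokens before sampling."""
    greens = green_list(prev_tokens, len(logits))
    return [l + delta if i in greens else l for i, l in enumerate(logits)]
```

Because the green list depends only on the last n tokens, repeated probing with controlled prefixes exposes it, which is the structural weakness Demark exploits.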
Summary: This paper addresses critical vulnerabilities in n-gram watermarking schemes for language models (LMs). The authors propose **DE-MARK**, a framework for watermark removal and exploitation, with three key contributions:

1. **Watermark Parameter Estimation.** Introduces _random selection probing_ to reconstruct red-green token lists and estimate watermark strength $\delta$ without prior knowledge of hash functions or n-gram parameters. Core algorithms include:
   - Relative probability ratio matrix
   - Context length detection via score consistency
   - Provably unbiased $\delta$ estimation
2. **Distribution-Preserving Removal.** Formulates watermark removal as probability reweighting. Theoretical guarantees bound the total variation distance between the post-removal ($P_R$) and original ($P$) distributions.
3. **Empirical Validation.** Tests on Llama3, Mistral, and ChatGPT show:
   - **Removal**: Reduces TPR@FPR=0.1% from >89% to <15% while preserving GPT quality scores ($\Delta<0.5\%$)
   - **Exploitation**: Achieves 93% TPR@FPR=0.1% when implanting stolen watermarks
   - **Black-box Adaptation**: 85% precision in red-green list detection with 10 samples

The work demonstrates fundamental limitations in current n-gram watermark robustness and provides tools for security auditing of LM watermarking systems.

## Update after Rebuttal

I found that the authors have adequately addressed most of my concerns; I recognize the contribution of the paper and keep my rating at "weak accept".

Claims And Evidence: **Well-supported claims:** The paper provides detailed descriptions of the DE-MARK framework through five well-formulated algorithms covering relative probability calculation, token scoring, n-gram length identification, watermark strength estimation, and green list identification. DE-MARK significantly reduces watermark detectability (TPR@FPR) from high levels (>60%) to low levels (<20%) while maintaining text quality as measured by GPT scores.
**Weaknesses in evidence:** The experimental results don't include variance measures or statistical significance tests across multiple runs. Methods And Evaluation Criteria: The paper's approach is methodologically sound: - Random selection probing efficiently estimates token probabilities - Five sequential algorithms form a coherent framework for watermark removal - Theoretical guarantees establish bounds on distribution gaps ## Evaluation Criteria Assessment - Diverse models (Llama3.1-8B, Mistral-7B, Llama3.2-3B) and datasets (Dolly, MMW) - Appropriate metrics (TPR@FPR, p-values, GPT scores) **Weaknesses:** - Missing computational efficiency analysis - Lack of statistical significance testing Theoretical Claims: I have checked the theoretical claims of the paper, especially the theorems in section 4, which are sound. Experimental Designs Or Analyses: The experiment settings and dataset usage are sound but need computational efficiency analysis and statistical significance testing. Supplementary Material: I reviewed the supplementary material, especially parts E and F, the proofs of the theorems in section 4. Relation To Broader Scientific Literature: The contribution of this paper is distinct from the broader scientific literature, as discussed in the related-work section. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: The setting of this paper is strong: detector-free scenarios genuinely arise in the real world. The theorems and proofs are sound, which strengthens the paper. Weaknesses: I would like to see an analysis of the number of queries. Some formulas need formatting polish. Some references should be cited correctly, e.g., `Watermark stealing in large language models` should be in ICML. In section 4, the authors should point out that detailed proofs of theorems are in Appendix XXX.
Other Comments Or Suggestions: I understand that the setting of this method is stronger than prior work's, but I also want to see performance comparisons with others, e.g., `Attacking llm watermarks by exploiting their strengths`, which would make this paper more sound. Questions For Authors: In section 5.2, why only consider $h=\{3,7\}$? In normal practice, $h=2$ is used, and I would like to see a detailed comparison over the selection of $h$. Theorem 4.2 provides bounds on distribution gaps, but there's no experimental validation of how tight these bounds are in practice. Can you provide empirical measurements of actual distribution gaps compared to the theoretical bounds? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and thoughtful suggestions, which have greatly helped improve the quality of our work. > Q1 The experimental results don't include variance measures or statistical significance tests across multiple runs. A1: Firstly, evaluating whether a sentence is watermarked does not require running multiple statistical significance tests because the watermark detection algorithm itself is deterministic and already employs a statistical hypothesis test [1]. Specifically, during detection, we set a theoretical false positive rate (e.g., 0.1%), calculate the corresponding statistical threshold, and then flag the sentence as watermarked if its score exceeds this threshold. To the best of our knowledge, most watermarking papers [1,2,3,4] do not perform multiple-run tests, and we have adopted similar settings. Moreover, we provide results across multiple datasets and LLMs to mitigate the effects of randomness. > Q2 Missing computational efficiency analysis/analysis of the number of queries. A2: Please refer to Response A2 in our rebuttal to reviewer 5v1A for a detailed discussion on query efficiency. > Q3 Some formulas need to be polished in formats. Some references should be referred to correctly Thank you for your valuable suggestion. We sincerely apologize for the mistake. We will revise the formatting and correct the references accordingly in the revised version. > Q4 In section 4, the authors should point out that detailed proofs of theorems are in Appendix XXX. We apologize for the confusion. In the revision, we will clearly link the proofs of the theorems within the main body of the text. > Q5 Comparison against Attacking llm watermarks by exploiting their strengths The setting of our work fundamentally differs from that of Pang, Qi, et al. [5], which represents an early exploration of watermark removal under relatively simple assumptions.
Specifically, their attacks are limited to the following scenarios: - Naive token editing (Sec. 4): This method inevitably leads to degraded text quality. - Multi-key scenario (Sec. 5): This is a trivial case where a simple averaging technique suffices, in contrast to our focus on the more practical and challenging one-key scenario. - Detector-available scenario (Sec. 6): Prior studies, including this one, have shown that attacks in this setting are relatively easy. In contrast, our work addresses the more difficult detector-unavailable scenario (D0 setting, detailed in Sec. 3.1, Lines [138–143]). Therefore, we believe a direct comparison would be neither meaningful nor informative. > Q6 I would like to see a detailed comparison between the selection of h We kindly note that additional results regarding the selection of $h$ are provided in the Appendix Figure 6 and Figure 7. In response to the recommendation, we have also included results for identifying the prefix n-gram length with $h$ = 2, 4, 6 and 8 in Figures 9–10, available via this [anonymous link](https://docs.google.com/document/d/e/2PACX-1vSfxtMpq2yL7QjOjW0NNWgI_J4LG9QHes7eBtj4P7LqdrIVBTuibloz0p0LLG5dhijwS7UhFcVfw537/pub). The results show that our method generalizes well across different selections of $h$. > Q7 Theorem 4.2 provides bounds on distribution gaps, but there's no experimental validation of how tight these bounds are in practice. Can you provide empirical measurements of actual distribution gaps compared to the theoretical bounds? We empirically evaluated our method on 3,000 token distributions from the MMW Book Report dataset using the LLaMA3.2 3B model. Our results show that $\mathbb{E}[\frac{\mathbb{TV}(P_R,P)}{\epsilon_1 f_2(\epsilon_1,\epsilon_2)+ (1-\epsilon_1) f_1(\epsilon_1,\epsilon_2)}]=0.328$ indicating that the distribution gap is well-controlled by the bound. 
Moreover, our GPT score evaluation results in Table 1 and Table 2 further confirm that the generated text remains highly consistent with the original in distribution, demonstrating minimal distribution distortion and strong preservation of text quality. [1] A watermark for large language models, Kirchenbauer et al., ICML 2023 [2] Robust Distortion-free Watermarks for Language Models, Kuditipudi et al., TMLR 2023 [3] Unbiased Watermark for Large Language Models, Hu et al., ICLR 2024 [4] DiPmark: A Stealthy, Efficient and Resilient Watermark for Large Language Models, Wu et al., ICML 2024 [5] Attacking LLM Watermarks by Exploiting Their Strengths, Pang et al, ICLR 2024 Workshop --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the thorough and thoughtful response. Your clarifications have addressed nearly all of my concerns, particularly around statistical testing, query efficiency, and theoretical assumptions. Regarding concerns from other reviewers about scope: while the proposed method focuses on n-gram-based watermarking, this is still a meaningful contribution. The KGW watermark remains one of the most influential schemes in the field, and providing a principled and effective attack against it—both theoretically and empirically—is valuable. I hope other reviewers will take this into account when evaluating the contribution. --- Reply to Comment 1.1.1: Comment: Thank you very much for your thoughtful and encouraging feedback. We truly appreciate your recognition of the novelty and value of our work, and we are glad that our clarifications effectively addressed your concerns. Thank you again for your time and constructive engagement.
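As a sanity check on the reweighting idea discussed in this thread (not the paper's full algorithm): if the green list and $\delta$ are estimated exactly for a KGW-style soft watermark, subtracting $\hat{\delta}$ from the green-token logits inverts the watermark; misclassification error $\epsilon_1$ and estimation error $\epsilon_2$ make this approximate, which is what Theorem 4.2 bounds. A minimal sketch with hypothetical logits and green set:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def remove_soft_watermark(wm_logits, green, delta_hat):
    """Subtract the estimated delta from tokens classified as green,
    then renormalize. Exact green/delta inverts the watermark; errors
    in either make the recovery approximate."""
    adjusted = [l - delta_hat if i in green else l for i, l in enumerate(wm_logits)]
    return softmax(adjusted)

# Hypothetical 4-token example: tokens {0, 2} are green, delta = 1.5.
base = [0.7, -0.2, 0.1, 0.4]
green = {0, 2}
wm = [l + 1.5 if i in green else l for i, l in enumerate(base)]
recovered = remove_soft_watermark(wm, green, 1.5)
```

With the exact parameters, `recovered` matches the softmax of the original logits up to floating-point rounding.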
Summary: - The paper presents DE-MARK, a framework designed to remove n-gram-based watermarks from Large Language Models (LLMs). It introduces a novel querying strategy, "random selection probing," to assess watermark strength and reconstruct watermarking parameters. - Unlike previous methods that rely on knowledge of the watermarking function or require paraphrasing, DE-MARK offers a general approach that estimates watermark parameters and reverses the watermarking effects while preserving the original language model’s distribution. - Beyond removal, DE-MARK can also be used to exploit watermarks by mimicking their structure, effectively generating watermarked text using an attacker LLM, which raises concerns about the security of watermarking scheme - The paper demonstrates the effectiveness of DE-MARK in both watermark removal and exploitation tasks on models such as Llama3 and ChatGPT, achieving a significant drop in watermark detectability without degrading text quality. ### Nits and Prior Relevant Work Which has not been cited: - Line 102: However, The -> However, the Claims And Evidence: - The paper demonstrates that DE-MARK significantly reduces watermark detectability in models like Llama3 and Mistral. After applying DE-MARK, the true positive rate (TPR) of watermark detection drops from over 60-90% to below 20%, even at low false positive rates. The authors also provide theoretical guarantees that the post-removal distribution remains close to the original model distribution. (Table 1) - Unlike previous methods that require access to the underlying hash function or paraphrasing tools, DE-MARK estimates watermark parameters (like red-green token lists and watermark strength) through a novel querying strategy called random selection probing. This approach allows it to infer the watermark without direct access to its configuration. - The paper shows that after learning the watermarking pattern, an adversary can generate text that mimics a watermarked LLM. 
Experiments on watermark exploitation confirm that DE-MARK can recreate watermarked content with high accuracy, making it possible to generate fake watermarked text. - The paper uses the GPT score as a metric to assess text quality before and after watermark removal. Results show that even after applying DE-MARK, the GPT score remains nearly the same, indicating that the method does not degrade fluency, coherence, or correctness in the generated text. Methods And Evaluation Criteria: - The paper does a good job of explaining why watermark removal is an important and underexplored area. The motivation is clear, and the threat model is well-defined. - Unlike previous work that relies on paraphrasing or explicit knowledge of the watermarking function, DE-MARK estimates the watermark structure through a statistical approach. The authors also provide a theoretical bound on the difference between the original and post-removal distributions, which strengthens the credibility of their method. - The authors evaluate DE-MARK on multiple models, including Llama3 and Mistral, and even apply it to ChatGPT in a real-world case study. The variety of models tested helps support the claim that DE-MARK generalizes well across different LLMs - The evaluation considers both watermark detectability (TPR@FPR) and text quality (GPT score). This shows not only how well DE-MARK removes watermarks but also that it does not degrade output quality. Theoretical Claims: - The authors derive an upper bound on the total variation distance between the original and post-removal language model distributions. This is an important result because it provides a theoretical guarantee that DE-MARK does not significantly alter the statistical properties of the model’s output. 
- The bound explicitly depends on two key error terms: (1) misclassification error $(\epsilon_1)$, which accounts for incorrectly identified watermark tokens, and (2) estimation error $(\epsilon_2)$, which quantifies how accurately DE-MARK estimates the watermark strength. - The proof in the appendix looks correct based on working out the steps myself, though more details could be added to make it easier to follow. Experimental Designs Or Analyses: - The paper evaluates DE-MARK on multiple language models, including Llama3 (3B & 8B), Mistral 7B, and ChatGPT. - This variety ensures that the method is not tied to a specific model architecture and can generalize across different LLMs. - The authors measure watermark detectability before and after applying DE-MARK using true positive rate (TPR) at fixed false positive rates (FPR). - Results show that after DE-MARK is applied, watermark detectability drops significantly, indicating successful removal. - The paper also evaluates the impact of DE-MARK on text quality using GPT-score, showing that the removal process does not significantly degrade generation quality.
Would DE-MARK still work against semantic watermarks or distortion-free watermarking? Even if not tested, discussing this in the limitations section would make the paper stronger. - The method requires multiple queries to reconstruct the watermarking scheme, which could be computationally expensive. It would be helpful if the paper included an analysis of query efficiency: how many queries are typically needed to achieve successful watermark removal? Could optimizations reduce this cost? - While DE-MARK is tested on ChatGPT, the paper does not extensively evaluate settings where log probabilities are entirely unavailable (a common scenario for many commercial LLM APIs). - The assumption about Gaussian noise also seems like a strong and arbitrary assumption to me without any empirical evidence. - In addition to the unbiasedness of the estimate, it would also be helpful to do a consistency analysis of the estimator, i.e., how fast does the estimate converge? How many queries are required to decode a watermark, and what is the practical feasibility of DE-MARK? Other Comments Or Suggestions: - Addressing practical concerns I raised above such as scalability, robustness to different watermarking techniques, and real-world constraints (e.g., no probability access) would make the paper even stronger. If these aspects were explored further, the work would have a broader impact and provide even more value to the AI research community. Questions For Authors: One thing that isn’t super clear to me is that it seems this watermark removal assumes a KGW watermarking scheme. There are many more watermarking schemes which are also n-gram watermarking schemes but don't explicitly add a delta to the logit params and also have distortion free property such as the Gumbel watermark by Aaronson. Would this watermark removal scheme also work on that? If not, I suggest authors to tone down the claims in the paper. Open to more discussion on this. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Q1: Would DE-MARK still work against modern watermarking techniques (e.g. distortion-free methods like Gumbel watermark by Aaronson)? Is it specially designed for KGW? A1: Our proposed methods can be generalized to most n-gram-based methods, and we add additional experiments for two popular distortion-free n-gram-based watermarking algorithms to support our claim, $\gamma$-reweight[1] and DiPmark[2]. Please see Table 13 and Table 14 in this [anonymous link](https://docs.google.com/document/d/e/2PACX-1vSfxtMpq2yL7QjOjW0NNWgI_J4LG9QHes7eBtj4P7LqdrIVBTuibloz0p0LLG5dhijwS7UhFcVfw537/pub). The results show that our method can effectively remove the watermarks. However, it is not possible to remove the Gumbel watermark, as it only produces a one-hot probability distribution, i.e., assigning a probability of 1 to a single token and 0 to all others. This output lacks sufficient information to theoretically bound the difference between the original and post-removal distributions of the language model. > Q2: Analysis of query efficiency? A2: We first present a detailed time efficiency analysis to show that our time cost is acceptable, then we show some acceleration methods to further reduce the time cost. **Time Efficiency Analysis** The experiments were conducted on a single RTX 6000 Ada GPU. Table 11 and Table 12 in this [anonymous link](https://docs.google.com/document/d/e/2PACX-1vSfxtMpq2yL7QjOjW0NNWgI_J4LG9QHes7eBtj4P7LqdrIVBTuibloz0p0LLG5dhijwS7UhFcVfw537/pub) present the time efficiency of watermark stealing and removal with respect to varying numbers of queries. For De-mark, we require $m(m - 1)$ queries per token. In our experiments, we set $m = 20$, resulting in a total of 380 queries per token. While this number may seem high, it is important to note that repeated querying is a necessary strategy to comprehensively gather the required information. 
Despite this, our experimental results indicate that the time cost is acceptable when compared to the baseline. The computation remains efficient due to the following reasons: - Each query is very short; - The query templates are identical across inputs, allowing for the reuse of most pre-computed key-value pairs, which significantly accelerates computation; **Acceleration method** For watermark identification, the time cost is already low, and we find no additional optimization necessary. For watermark removal and exploitation, we can achieve 2$\times$ to 4$\times$ speedup by randomly removing token pairs in Alg. 1. This introduces a trade-off between removal performance and inference speed. Additionally, increasing the value of $\eta$ can yield improved outcomes, the results are presented in Table 15 in the [anonymous link](https://docs.google.com/document/d/e/2PACX-1vSfxtMpq2yL7QjOjW0NNWgI_J4LG9QHes7eBtj4P7LqdrIVBTuibloz0p0LLG5dhijwS7UhFcVfw537/pub). A more sophisticated token pair clipping strategy may also lead to further performance improvements. > Q3: The paper does not extensively evaluate settings where log probabilities are entirely unavailable (a common scenario for many commercial LLM APIs). A3: Regarding watermark identification and exploitation, we kindly point out that we have conducted extensive experiments in the black-box setting (i.e., where log probabilities are entirely unavailable), as shown in Tables 2, 3, and 9, and Figures 4, 5, and 7. As for watermark removal, as discussed in Section 3.1, when log probabilities are completely unavailable, it is impossible to bound the gap between the original and post-removal distributions of the language model. This is because the output provides only a single token, which is insufficient to recover the full distribution. > Q4: The assumption about Gaussian noise also seems like a strong and arbitrary assumption A4: Thank you for pointing this out. Actually, in Appendix E. 
Proof of Theorem 4.1, the Gaussian noise assumption is not necessary for the proof; we only require the noise to be symmetrically distributed (i.e., $P(\epsilon=x)=P(\epsilon=-x),\forall x \in \mathbb{R}$), which is a mild and intuitive assumption. We will update the assumption accordingly in our revision. > Q5: How fast does the estimate converge? How many queries are required to decode a watermark? A5: Demark has relatively low computational cost, which enables us to increase the number of queries for more accurate estimations. Please refer to Figure 8 in this [anonymous link](https://docs.google.com/document/d/e/2PACX-1vSfxtMpq2yL7QjOjW0NNWgI_J4LG9QHes7eBtj4P7LqdrIVBTuibloz0p0LLG5dhijwS7UhFcVfw537/pub) for the convergence speed of estimating $\delta$. We evaluate the performance using different values of $m$ in Algorithm 4 and vary the parameter $c$ (set to 1, 2, 5, 10, 20, 50, and 100) to generate different numbers of queries. [1] Unbiased Watermark for Large Language Models, Hu et al., ICLR 2024 [2] DiPmark: A Stealthy, Efficient and Resilient Watermark for Large Language Models, Wu et al., ICML 2024
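The probability-ratio idea underlying the $\delta$ estimation discussed above can be seen on a toy example. This is an illustration with hypothetical logits, not the paper's exact estimator (which aggregates over many random-selection probes): for a green/red token pair, the log probability ratio under the watermarked model shifts by exactly $\delta$ relative to the unwatermarked model, because a softmax log-ratio equals the logit difference.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical 4-token vocabulary; token 0 is green (logit raised by delta).
delta = 2.0
logits = [1.0, 0.5, -0.3, 0.2]
wm_logits = [logits[0] + delta] + logits[1:]

p, p_wm = softmax(logits), softmax(wm_logits)
# Comparing the green/red log-ratio with and without the watermark
# recovers delta exactly in this noiseless toy setting:
delta_hat = math.log(p_wm[0] / p_wm[1]) - math.log(p[0] / p[1])  # ≈ 2.0
```

In practice only watermarked probabilities are observable and the lists are unknown, which is why repeated probing and the unbiasedness/convergence analysis above are needed.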
Learning Extrapolative Sequence Transformations from Markov Chains
Accept (poster)
Summary: The authors consider the task of maximizing a function s(X), like sentiment or predicted protein activity, in a discrete space, which is a challenging task. The baseline they consider is (annealed?) MCMC with proposals from a pre-trained model. Instead, they suggest running MCMC for some amount of time and then training a language model on these chains so that it can learn the features of sequences that tend to improve s(X). Claims And Evidence: see summary. Methods And Evaluation Criteria: see summary. Theoretical Claims: see summary. Experimental Designs Or Analyses: see summary. Supplementary Material: No. Relation To Broader Scientific Literature: See Summary. Essential References Not Discussed: None I know of. Other Strengths And Weaknesses: **Strengths:** * I really appreciated the toy example in section 2. **Weaknesses / questions:** * Could you compare to this paper that also trains on improving pairs of data? https://arxiv.org/pdf/2405.18075 * Why did you not anneal in your MCMC? * Why don't you compare with other discrete optimization methods? Ex. https://openreview.net/forum?id=ZMP0Bki9aK * One could spend the compute training q_\theta on running more MCMC. Is this better? Could you describe how you accounted for this in your experiments (it seems to me you did not)? * In section 2, you suggest your model can be useful even if you only have an approximation of s. Could you demonstrate this? Typically in these cases, especially protein design, one performs iterative design, iteratively making measurements and improving the approximation of s. Would your method work well here? Can you demonstrate that? Other Comments Or Suggestions: See above. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your careful and detailed review of our paper. We are grateful for the feedback, and address your main points below: > “In section 2, you suggest your model can be useful even if you only have an approximation of s. Could you demonstrate this?” In our extrapolation settings (sentiment and protein design), we intentionally train imperfect approximations of the true property we are attempting to optimize; the extrapolation setting assumes that we do not have data outside a certain range, and that we can train a scorer to approximate the oracle, but that this scorer will not be reliable outside of the range of values it was trained on. Therefore, reviews outside the training region are OOD for our guide, which may return unreliable predictions in this extrapolation range (see e.g. https://arxiv.org/abs/2006.10108). Specifically, the sentiment scorer is trained on reviews between 2 and 4 stars, and the protein scorer is trained with values above -5, despite the objective being to generate reviews or proteins beyond that range. We therefore believe that our results demonstrate that our model is useful even with an imperfect approximation of s. > Could you compare to this paper that also trains on improving pairs of data? Thank you for the reference; this is a very interesting approach that we will include in our discussion. The key difference with our work is that PropEn performs optimization in latent space and subsequently decodes from that latent space. However, latent-space models such as VAEs are well-known to suffer from posterior collapse when using highly flexible generative models such as the large language models we use in our experiments. To sidestep this issue, we perform both search (MCMC) and optimization ($q_{\theta}$) in discrete space. As such, we compare with other pair-learning-based methods that operate in discrete space.
Our primary baseline is ICE, which also trains on improving pairs of data and is state-of-the-art for the tasks we consider. Additionally, PropEn focuses on single-property enhancement, while our tasks as implemented require multi-attribute optimization: protein stability and similarity to the original protein for protein engineering, fluency and sentiment for the sentiment task, and EER and semantic similarity for anonymization.

> “Why don't you compare with other discrete optimization methods? Ex. https://openreview.net/forum?id=ZMP0Bki9aK”

While in principle this approach is complementary to ours, since we could benefit from these same proposals to improve the efficiency of our MCMC step, in practice it would not be straightforward to apply it to the language modeling applications we consider here, due to the size of the vocabulary and the sequence lengths. Generally speaking, we view improvements to MH for language models as a promising area for future work, complementary to our direction here of learning efficient extrapolation models $q_{\theta}$ on the basis of the Markov chains from MH. We expect that more efficient proposals would yield a more efficient exploration step and a more effective $q_{\theta}$ for the same amount of compute.

> “Why did you not anneal in your MCMC?”

We employ MCMC to explore sequence transformations that improve the objective, and then use our learned $q_{\theta}$ to greedily maximize this objective. While annealing could help MCMC attain a better local optimum, it could actually be counterproductive to the goal of exploring productive sequence-to-sequence transformations. Compared to other works such as https://openreview.net/forum?id=ZMP0Bki9aK above, we seek to optimize efficiently using $q_\theta$ in as few steps as possible, while annealing would require a much larger number of steps.
> “One could spend the compute training $q_{\theta}$ on running more MCMC.” While it is true that the time spent training $q_{\theta}$ could be used to run MCMC for longer, the training cost is a fixed cost that occurs offline, while running further iterations of MCMC is an online cost which would add more computational expense for each inference example. We emphasize that once $q_{\theta}$ is trained, it can be applied to new instances without further MCMC steps. Training $q_{\theta}$ allows for rapid inference on unseen examples, while MCMC requires an expensive sampling process for each inference example. --- Rebuttal Comment 1.1: Comment: This mostly addresses my concerns. I appreciate most discrete optimization methods assume a small alphabet; thanks! It would however still strengthen the paper to include the cost of training $q_\theta$ when running MCMC -- in the protein case you only want to find one optimum so online vs offline is not a relevant distinction. Maybe a good thing to test would be just: how many MCMC iterations would it take to reach the optima reached by your method reported in the paper, if it's even reached at all? 5x? 20x? But not crucial for this rebuttal. --- Reply to Comment 1.1.1: Comment: Thank you for the idea for this experiment. We agree that in cases where we optimize from the same starting point, this is an interesting question, and will plan to update our paper with this analysis.
Summary: The authors propose a method to efficiently perform extrapolative generation, optimizing a property of interest. Specifically, they train an autoregressive model to predict new sequences or states that enhance the desired property, using training samples obtained from MCMC. They evaluate their approach across three domains: sentiment optimization, protein engineering, and text anonymization. To investigate whether the autoregressive model benefits from intermediate states, the authors conduct an ablation study on training episode design. They compare single-step training episodes with multi-step episodes, exploring different selection strategies — uniformly sampled states versus transitions that yield high relative improvement.

## Update after rebuttal

In the rebuttal, my questions have been adequately addressed. I still consider this work an interesting and valuable contribution to the conference, and I find this opinion to be somewhat supported by the other reviewers. Therefore, I will maintain my original score.

Claims And Evidence: Yes, the claims made are well supported by convincing evidence. Main claim/conclusion:

> We find that the autoregressive model can extrapolate as well or better than MCMC, but with the additional benefit of significantly higher sample efficiency.

The main conclusion is well supported by Table 1, Table 2, and Table 3. This is especially convincing because the authors' approach has been evaluated on three different domains.

Methods And Evaluation Criteria: Experiments have been performed for the three different domains mentioned above. Experimental setups seem reasonable.

Theoretical Claims: --

Experimental Designs Or Analyses:

* The experiments comparing the proposed approach to MCMC appear well-motivated and relevant.
* Autoregressive refinement: The results in this section seem a bit inconclusive.
While intermediate state generation improves performance in the protein domain, its impact on anonymization and sentiment tasks remains unclear. Could the authors provide an intuition — or ideally, a deeper analysis — on when and why autoregressive refinement is beneficial in different scenarios? Additionally, when constructing training episodes, how should the hyperparameter controlling the number of included states be set or adjusted for different tasks?

Supplementary Material: I quickly read through Section B on extrapolation experimental details and Section C on hyperparameters.

Relation To Broader Scientific Literature: The authors' work builds on [1]. Because of their relevant experimental setup and their good results, I think the authors provide a significant contribution to the community.

[1] Padmakumar, Vishakh, et al. "Extrapolative controlled sequence generation via iterative refinement." International Conference on Machine Learning. PMLR, 2023.

Essential References Not Discussed: --

Other Strengths And Weaknesses:

### Other Strengths:
- Clarity: The paper is very well written with a clear scope. Figure 1 and the toy example help ease the reader into the paper. The authors provide their rationale whenever needed to understand their assumptions and hypotheses.

### Other Weaknesses:
- Missing information on computational burden: While this may be a minor issue, it would have been helpful to include more details on the additional computational cost associated with model training and inference. If that cost is negligible, the reported differences in the number of needed iterations should indeed be a strong argument for the efficiency of the proposed method.

Other Comments Or Suggestions: --

Questions For Authors: See above.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the detailed and thoughtful feedback, and aim to address remaining questions and concerns here.

> “While intermediate state generation improves performance in the protein domain, its impact on anonymization and sentiment tasks remains unclear.”

In the case of our sentiment task, there is no clear benefit to adding states: the task is sufficiently simple that a single edit is the most effective way to extrapolate, which is why we consider first/best to be the best method in this case. In more complex tasks, such as protein synthesis and anonymization, we find additional steps to be more important, as they can provide a generalization benefit. Additional edits consistently decrease semantic similarity for anonymization but improve EER up to a certain point, after which the SBERT score decreases without consistent improvement in EER. We select our number of states based on this point.

| Episode length | EER | SBERT |
| ------------- | :-------------: | :-------------: |
| 2 (first/best) | 0.132 | 0.923 |
| 3 | 0.161 | 0.857 |
| 4 | 0.15 | 0.827 |
| 5 | 0.224 | 0.835 |
| 6 | 0.187 | 0.776 |
| 7 | 0.198 | 0.762 |
| 8 | 0.2 | 0.745 |

> “Missing information on computational burden: While this may be a minor issue, it would have been helpful to include more details on the additional computational cost associated with model training and inference.”

Thanks for the suggestion! Indeed, the computational burden reported in our tables is purely inference-time computation. We recognize that our model has the additional computational burden of generating training data and fine-tuning a model on that data. However, this is a single fixed cost at training time. $q_{\theta}$ takes seconds for each inference example, whereas MCMC takes ten minutes or longer for a single batch at inference time.
The iteration counts in our tables are meant to provide an understanding of the scale of the ongoing computational cost at inference time, and do not account for the fixed (“offline”) training cost. We will be sure to include more details on the offline training costs in our revisions.

---

Rebuttal Comment 1.1: Comment: Dear Authors, thank you for your answers. My questions have been addressed. I will maintain my original rating.
Summary: This paper proposes an improvement to the MCMC algorithm, specifically to the random search used in Monte Carlo exploration: a model trained from the MCMC search trajectories is applied to greedily optimize the properties of interest. Empirically, the proposed method is able to sample efficiently and extrapolate well in natural language and biological tasks.

Claims And Evidence: Most claims in the submission are supported by convincing evidence. However, there are several gaps in the proposed method and experiments to support the novelty and performance of this paper. For more comments, please refer to the details in the "Methods and Evaluation Criteria" and "Experimental Designs or Analyses" sections.

Methods And Evaluation Criteria: Most parts of the proposed methods and evaluation make some sense. However, there remain several issues:

- A theoretical perspective on why $q_{\theta}$ leads to more efficient sampling and what makes an optimal $q_{\theta}$ for the MCMC algorithm is missing.
- Although the proposed method effectively reduces the sampling iterations, the MCMC sampling used to generate data for training the model should also be considered, and an ablation on how the number of iterations for data generation in model training affects the model performance would be helpful.

Theoretical Claims: There are no theoretical claims in this paper.

Experimental Designs Or Analyses:

- It would be helpful to add an ablation of the model performance with different $q_{\theta}$ and compare to demonstrate the key components that lead to the best $q_{\theta}$.
- The paper does not compare to model-based optimization methods in reinforcement learning, for example [1].

[1] Designing Cell-Type-Specific Promoter Sequences Using Conservative Model-Based Optimization. NeurIPS 2024.

Supplementary Material: I reviewed most parts of the supplementary material.
Relation To Broader Scientific Literature: The paper is related to guided sampling and optimization methods, as well as applications in both natural language and biological domains. Essential References Not Discussed: NA Other Strengths And Weaknesses: The paper proposes an interesting method that utilizes a learned policy to replace stochastic exploration approaches. More theoretical perspectives on sampling efficiency and discussions on the method's extrapolation capability would strengthen the paper. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the feedback offered in this review, and hope to address some concerns.

> “A theoretical perspective of… what makes an optimal $q_{\theta}$ for the MCMC algorithm is missing.”

We agree theoretical grounding is important. Nonetheless, compared to previous approaches, we believe our approach has demonstrated its robustness across three different settings, with minimal problem-specific tuning. Furthermore, though the learned extrapolation model $q_{\theta}$, despite its empirical success, lacks theoretical guarantees, the MCMC search procedure that is the basis for fitting $q_{\theta}$ inherits the usual benefits of MCMC. Since extrapolative generation is an understudied area of deep learning, we hope our contribution can motivate more theoretical results in the future.

> “...the MCMC sampling to generate data for the training process of the model should also be considered…”

While we note the reviewer’s concern, we emphasize that although we use MCMC as a source of training data for $q_{\theta}$, MCMC is not required on new problem instances; $q_{\theta}$ can be applied to new starting sequences without MCMC if the extrapolation criteria are the same. Thus, besides the extrapolation benefits of $q_{\theta}$, it also effectively amortizes the sampling process.

> “an ablation on how the number of iterations for data generation in model training….”

We appreciate this suggestion and present our ablation of limited MCMC exploration on our anonymization task. We limit the MCMC chains to 50% and 25% of their length before constructing training episodes. Below, we show that increasing the resources spent during data generation leads to improvements in our trained $q_{\theta}$ model.
We consider this to be a strength of our approach; the theoretical properties of MCMC allow for more exploration and a better fit of the target distribution as MCMC is run for more steps, offering a mechanism to improve generalization at the expense of further offline training time. In particular, the table below suggests that running MCMC for longer (e.g., 150%, 200%) could lead to better results than presented in the paper.

| Exploration | EER | SBERT |
| ------------- | :-------------: | :-------------: |
| 25% | 0.2 | 0.701 |
| 50% | 0.188 | 0.781 |
| 100% | 0.224 | 0.835 |

> “...compare to demonstrate the key components that lead to the best $q_{\theta}$…”

Regarding ablations of $q_{\theta}$, thank you for the suggestion. We do analyze several important components of $q_{\theta}$ in the text of the paper, most notably the data selection strategy (Section 3.4) and reward choice (Appendix A). Following reviewer suggestions, we can now additionally present results for the effects of the number of iterations for data generation (see above) and for the effects of episode length on the anonymization task:

| Episode length | EER | SBERT |
| ------------- | :-------------: | :-------------: |
| 2 (first/best) | 0.132 | 0.923 |
| 3 | 0.161 | 0.857 |
| 4 | 0.15 | 0.827 |
| 5 | 0.224 | 0.835 |
| 6 | 0.187 | 0.776 |
| 7 | 0.198 | 0.762 |
| 8 | 0.2 | 0.745 |

We find that additional edits improve EER at the expense of decreasing semantic similarity; this is only true up to a certain point, after which the SBERT score decreases without consistent improvement in EER, demonstrating the importance of this component in $q_{\theta}$. Multiple iterations are similarly useful for protein synthesis. We are happy to discuss additional factors should there be further concerns. For example, the paper as it stands focuses on a basic autoregressive architecture for $q_{\theta}$.
However, it is likely that our results could be improved, for example by initializing from larger pre-trained models for $q_{\theta}$ or employing different training strategies.

Regarding comparisons to further baselines, we agree this would strengthen the work. To our knowledge, our comparisons include SOTA extrapolative baselines for the challenging extrapolation tasks we consider, but of course there are many other tasks to consider, such as the design of cell-type-specific promoters for gene delivery considered in [1]. However, the complex approach described in [1] involves a very application-specific pipeline with five distinct steps, which would be hard to adapt to our more general setting. On the other hand, our approach is largely application-agnostic provided a suitable pre-trained LM is available.

Regarding comparisons to RL-based methods such as [1], our approach may be viewed as performing off-policy reinforcement learning, even though we do not explicitly cast it that way (see e.g. https://arxiv.org/pdf/2106.02039). We considered using a more overt framing of the work as RL, but felt the additional notational baggage made it harder to understand the main idea. We view our contribution as a first effort to adapt off-policy RL to extrapolation settings by bridging RL and MCMC. More discussion of this point will be included in the paper.

---

Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal. It resolves most of my concerns and I increased my score.
Summary: This paper presents a new approach for extrapolative sequence generation tasks, utilizing sequences produced through Markov Chain Monte Carlo (MCMC) exploration as training data. This approach targets tasks that necessitate the generation of new sequences exceeding previously recorded property values, such as in protein engineering, sentiment control, and text anonymization. The method initially utilizes MCMC to investigate and sample sequences that optimize specific target properties. The sampled Markov chains are employed to train an autoregressive model that predicts advantageous sequence transformations in fewer steps, thereby extrapolating beyond the original data distribution. This approach enhances sample efficiency by decreasing the number of necessary inference steps, while preserving sequence fluency and semantic coherence.

Claims And Evidence: The evidence presented in the paper effectively supports the authors' claims. The experimental methodology is robust, and the results demonstrate significant advancements compared to previous methods in the specified context. The assertions regarding enhanced extrapolation capability and sample efficiency are supported by the data, with only minor caveats as discussed.

Methods And Evaluation Criteria: The current benchmarks are essential and strong enough to support the claims.

Theoretical Claims: The paper introduces its technical and theoretical parts clearly.

Experimental Designs Or Analyses: The evaluation in the paper uses three benchmark tasks in distinct domains – protein engineering (ACE2 stability), text sentiment style transfer (Yelp reviews), and text anonymization. This selection is a strong point of the work: it covers both biological sequence generation and natural language generation with different kinds of target properties.
Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Weakness: a potential limitation of the approach itself is its dependence on the quality of the scoring function and MCMC samples. If the scorer is mis-specified or the MCMC exploration is very limited, the AR model will learn a suboptimal strategy. In principle, one could consider enhancements like adaptive sampling (to focus on more promising regions of state space) or use an ensemble/agreement of multiple scorers to mitigate bias. Other Comments Or Suggestions: 1. The current framework typically utilizes initial sequences for transformation, such as the starting review or the original author’s text, that fall within the training distribution by design; for instance, a 3-star review may serve as a basis for generating a 5-star output. The system's performance under unusual initial states or noisy inputs has not been assessed. For example, in cases where a review is excessively lengthy or incorporates sarcasm, which may be underrepresented in the training data, can sentiment extrapolation maintain its reliability? 2. The diversity of the generated proteins from the current version remains unclear, which is significant in the context of protein engineering. In the context of model extrapolation, there exists a risk of mode collapse, wherein the model may identify and rely on a specific strategy or template to attain elevated performance across multiple outputs. The sentiment model may learn to append a standardized highly positive statement to the conclusion of each review to ensure a 5-star prediction, rather than genuinely altering the review in a diverse manner. The present assessment did not quantify the diversity or uniqueness of outputs. Questions For Authors: Please refer to the sections above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for the thoughtful response. We address some of the raised questions here.

> “If the scorer is mis-specified or the MCMC exploration is very limited, the AR model will learn a suboptimal strategy…”

These are valid concerns. A strength of our approach is that it benefits from the wealth of more sophisticated MCMC approaches; while we choose to use a basic sampler in this work, approaches such as the suggested adaptive sampling could lead to improved MCMC chains and thus better performance for $q_{\theta}$. More broadly, the theoretical properties of MCMC allow for more exploration and a better fit of the target distribution as MCMC is run for more steps, offering a mechanism to improve generalization at the expense of further offline training time.

> “... In the context of model extrapolation, there exists a risk of mode collapse…”

We acknowledge that there is a possibility of mode collapse for extrapolation tasks, and that we should examine the diversity of our outputs. In consideration of this point, for our protein task, we counted the number of unique outputs, finding that 100% of the 10k proteins generated using $q_{\theta}$ were unique. For sentiment and anonymization, we computed corpus BLEU between all pairs of generated sentences, finding that we achieve only 1.39 BLEU for sentiment and 0.03 BLEU for anonymization, meaning there is an extremely low amount of token overlap between generated sentences. Furthermore, in Appendix E we show randomly selected examples of our generated outputs for sentiment and anonymization, which demonstrate significant sample variance. In light of this evidence, we believe it is unlikely that mode collapse is occurring.
Improving Your Model Ranking on Chatbot Arena by Vote Rigging
Accept (poster)
Summary: The authors find that an attacker can meaningfully boost (or diminish) a target model’s ranking even when that model does not appear in the rigged battles. New votes cast in entirely different matchups, where the target never competes, can still change the target’s overall standing because all of the models’ ratings become tightly interconnected in the Elo/Bradley-Terry system.

## Update after rebuttal

I keep my score as a strong accept after reviewing comments made during the rebuttal period.

Claims And Evidence: The authors compare a straightforward target-only rigging approach (manipulating only battles involving the target model) against their proposed omnipresent rigging strategies (Omni-BT and Omni-On) that manipulate every vote. Their experiments demonstrate that the omnipresent methods achieve significantly higher ranking improvements. To assess robustness, they simulate various adversarial scenarios by:

- Varying the accuracy of the de-anonymization classifier (introducing a mix of anonymous votes) (Table 1)
- Altering the sampling distribution so that the target model appears less frequently (Table 2)
- Incorporating concurrent genuine user votes (Table 3)

Methods And Evaluation Criteria: The authors set up a series of simulation-based experiments using real historical votes from Chatbot Arena (about 1.7 million recorded votes). In those simulations, they:

- Compare different rigging strategies (target-only vs. “omnipresent”) on various target models to see how effectively each method can boost (or lower) a model’s ranking.
- Vary the “threat model”: for instance, they assume different levels of attacker knowledge (access to raw vote data vs. only the leaderboard, perfect vs. imperfect model-identity detection, etc.).
- Introduce concurrent user votes (other normal votes keep rolling in) to see if real-time voting dilutes or blocks an attacker’s manipulations.
- Measure how many rigged votes are needed to get a certain rank increase, comparing the efficiency of voting only in matchups with the target model (“target-only”) vs. voting in all matchups (“omnipresent”)—the latter proves more efficient.
- Try simple defenses (like filtering suspicious votes or identifying malicious users) and see whether those actually prevent the rigging, which further highlights the system’s vulnerability.

They show that with only a few thousand malicious votes, an attacker can achieve a meaningful jump in a model’s rank. The authors propose three methods of attack:

- Target-Only Rigging: This method only manipulates battles in which the target model directly participates. When the target is detected, the strategy votes in its favor. While straightforward, it’s inefficient because the target appears in only a small fraction of all battles.
- Omni-BT Rigging: This approach leverages the interconnected nature of the rating system. Instead of only affecting battles involving the target, it manipulates every battle by first de-anonymizing the participants. The strategy then chooses the vote outcome that, when applied, maximally increases the target model’s rating relative to the model immediately ahead of it. It requires access to the complete historical vote data to compute how each potential vote would affect the overall rankings.
- Omni-On Rigging: Designed for situations where only the current public leaderboard is available (and not the full historical vote records), this method approximates how a new vote would change the ratings using an online update mechanism. It then selects the vote outcome that best boosts the target model’s position relative to its closest competitor. This method is more practical when the adversary has limited access to detailed vote histories.

Theoretical Claims: No theoretical claims need be checked in this work.

Experimental Designs Or Analyses: I only reviewed the material in the uploaded PDF file.
Supplementary Material: I only reviewed the material in the uploaded PDF file.

Relation To Broader Scientific Literature: I am not an expert in the leaderboard methods but am familiar with the lmsys leaderboard.

Essential References Not Discussed: I am not an expert in the leaderboard methods but am familiar with the lmsys leaderboard.

Other Strengths And Weaknesses:

Strengths: The paper demonstrates that adversaries can dramatically manipulate model rankings by strategically rigging a relatively small number of votes. This is particularly important as these ranking systems (and lmsys' leaderboard in particular) have a large impact on how these models are perceived. I appreciate the novelty of the proposed method: it is particularly effective at undermining the reliability and integrity of vote-based evaluation systems because it does not require the rigged votes to involve the target model. Where in the experiments is the claim "showing that omnipresent rigging strategies can improve model rankings by rigging only hundreds of new votes" validated? This is mentioned in a couple of areas in the text. It seems like the number of rigged votes is on the order of thousands based on the experiments.

Other Comments Or Suggestions: "Chtbot Arena" "Copilot Areana"

Questions For Authors: Does this method work if the leaderboard simply adds a delay before revealing to the attacker how their voting impacted results? How would it impact the Omni-On and Omni-BT methods?

Code Of Conduct: Affirmed.

Overall Recommendation: 5
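The online mechanism the review attributes to Omni-On — battles that never involve the target still moving the target's rank — can be illustrated with the standard Elo update. The three-model leaderboard, the ratings, and the K-factor below are invented for illustration and are not the paper's experiment:

```python
def elo_update(r_a, r_b, a_wins, k=4.0):
    """Standard online Elo: one vote moves only the two participants' ratings."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    delta = k * ((1.0 if a_wins else 0.0) - expected_a)
    return r_a + delta, r_b - delta

# hypothetical leaderboard: the rival sits just above the target
ratings = {"target": 1000.0, "rival": 1020.0, "filler": 1000.0}

# rig 30 votes in rival-vs-filler battles, always against the rival;
# the target never appears in any rigged battle
for _ in range(30):
    ratings["filler"], ratings["rival"] = elo_update(
        ratings["filler"], ratings["rival"], a_wins=True)

ranking = sorted(ratings, key=ratings.get, reverse=True)
# the target's rank improves even though its own rating never changed
```

Omni-BT, by contrast, is described as refitting the full Bradley-Terry model over the vote history before choosing each outcome; the online approximation above only needs the public ratings.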
Rebuttal 1: Rebuttal: Thank you for your strongly supportive review and insightful suggestions. Below we respond to the comments in **Weaknesses (W)** and **Questions (Q)**, and will fix the typos in the paper revision.

---

***W1: About the claim in the experiments validated "showing that omnipresent rigging strategies can improve model rankings by rigging only hundreds of new votes".***

In our demonstration (Figure 1), our omnipresent rigging strategy requires around 900 rigged votes to achieve a one-rank promotion for the *Phi-3-small-8k-Instruct* model from its initial ranking position. Following your suggestion, we will revise the wording in the paper to clarify this claim and better reflect the experimental evidence.

---

***Q1: Does this method work if the leaderboard simply adds a delay before revealing to the attacker how their voting impacted results? How would it impact the Omni-On and Omni-BT methods?***

Thank you for the insightful question. In fact, our experiments in $\\textrm{\\color{blue}Section 5.3}$ (page 5), which investigate *rigging with concurrent user voting*, are designed to reflect the exact scenario you described. In this setting, the attacker does not have access to the real-time impact of their votes due to interference from other users’ concurrent voting (denoted as $\\mathbb{V}\_{O}$ as in Lines 100-103), which effectively simulates a delay in leaderboard updates and creates what we refer to as a *perturbed leaderboard*. The more concurrent votes $\\mathbb{V}\_{O}$ there are, the greater the perturbation to the leaderboard, which is analogous to introducing a *longer delay* before the attacker can observe the results of their rigging. Despite this, our results in $\\textrm{\\color{blue}Table 3}$ (page 6) show that both our omnipresent strategies Omni-On and Omni-BT maintain robust and stable rigging performance, even when up to 100,000 concurrent votes are introduced.
--- ***Other Comments Or Suggestions: "Chtbot Arena" and "Copilot Areana"*** Thank you for pointing out the typos. We will correct them in the revised version of the paper. --- Rebuttal Comment 1.1: Comment: Thank you for responding to my questions. I have also looked over the other reviewer responses and interactions. I disagree with some of the points made by others that results such as this need to be observed in the wild and continue to keep my score. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for the thoughtful follow-up and greatly appreciate your continued support. As a red-teaming paper, our goal is to expose potential vulnerabilities and demonstrate practical vote-rigging scenarios through reproducible experiments, without disrupting real-world benchmarks ourselves. Your encouragement and high evaluation mean a great deal to us—thank you again!
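The Bradley-Terry interconnectedness invoked throughout this thread — any vote, wherever it lands, can reshuffle the fitted scores of every model — can be checked with a small refit. The sketch below uses Hunter's MM iteration for the Bradley-Terry maximum-likelihood estimate; the win counts are invented for illustration and are not the paper's data:

```python
def fit_bradley_terry(wins, iters=500):
    """Hunter's MM updates for Bradley-Terry strengths.
    wins[i][j] = number of votes in which model i beat model j."""
    n = len(wins)
    p = [1.0] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            total_wins = sum(wins[i])
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new.append(total_wins / denom)
        norm = sum(new)
        p = [x * n / norm for x in new]   # fix the arbitrary scale
    return p

# models: 0 = target, 1 = rival (initially ranked above the target), 2 = other
baseline = [[0, 8, 12],
            [13, 0, 10],
            [8, 10, 0]]
rigged = [row[:] for row in baseline]
rigged[2][1] += 60   # 60 rigged votes: "other beats rival"; target uninvolved

before = fit_bradley_terry(baseline)
after = fit_bradley_terry(rigged)
# before: the rival outranks the target; after: the target outranks the rival,
# although none of the rigged votes involved the target
```

The MM iteration converges here because the comparison graph is strongly connected and every model has at least one win and one loss.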
Summary: This paper investigates how Chatbot Arena can be manipulated to artificially boost the ranking of a target model. The authors first describe a target-only rigging strategy, which detects and votes exclusively for the target model whenever it appears. They then present an **omnipresent** strategy that manipulates every new vote by first identifying which two models appear in each matchup and selectively voting in a way that indirectly benefits the target model—even if it is not involved in that specific matchup. Overall, the paper underscores the vulnerability of voting-based rankings to adversarial manipulation, calls for further research on robust defenses, and highlights broader implications for any crowdsourced evaluation system. Claims And Evidence: The claims made are supported through empirical evidence. The authors demonstrate vote rigging by altering model rankings. Such evidence is provided through simulation experiments on around 1.7 million historical votes, showing that omnipresent rigging outperforms the simpler, target-only method. Methods And Evaluation Criteria: The authors rely on a two-part procedure for “omniscient” manipulative voting. First, they train or use a classifier that predicts a model’s identity from its generated responses (even though the platform anonymizes these models). Second, once the identity of each model in a matchup is predicted, a manipulative voting rule is applied. For instance, an Elo-based or Bradley-Terry–based objective guides which model should “win” to indirectly raise the target model’s score. Evaluation primarily measures how many new votes are required to shift the target model’s ranking upward by a certain amount (e.g., from rank 20 to rank 10). The authors provide side-by-side comparisons among different rigging approaches (target-only vs. omnipresent) and document how robust each approach is under various conditions. 
Theoretical Claims: The main approach of the paper relies on the inter-connectedness of the Bradley-Terry model, which is theoretically sound. Although the paper’s arguments are mostly kept at an intuitive or “informal proof” level as mentioned by the authors, the reasoning is coherent and standard results on Bradley-Terry models generally support the claim that any single vote in a pairwise battle influences, in principle, the final Bradley-Terry scores for all models, including those not involved in the particular matchup. Experimental Designs Or Analyses: **Partitioning the data**: The authors take around 1.7 million historical votes, dividing them so that 90% become the “frozen” baseline while 10% simulates other user votes happening concurrently. This is fine. **Threat Models**: They vary sampling distributions (uniform, non-uniform), anonymity levels, concurrent user votes, and the presence of unrecognized models. This is also mostly fine. **Comparison to Baselines and Defenses**: The experiments compare “target-only” vs. “omnipresent” vote-rigging strategies, and they measure how well simple detection or vote-filtering defenses can mitigate the rigging in various tables and figures. This is also fine. Supplementary Material: I believe the authors did not provide any supplementary materials, such as code for their experiments. Relation To Broader Scientific Literature: Recent studies on LLM watermarking and attribution (e.g., Zhao et al., 2024; Huang et al., 2025) have discussed the possibility of identifying a single target model and exclusively voting for it (“target-only” rigging). This paper situates those findings within the broader framework of Bradley-Terry–style ratings and extends them by introducing an “omnipresent” rigging method that can alter every new vote.
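The inter-connectedness noted above can be demonstrated with a toy Bradley-Terry fit (a minimal sketch using the classic minorize-maximize update; the three models and vote counts are made up for illustration):

```python
def fit_bradley_terry(wins, n_models, iters=500):
    """Fit BT strengths via the classic MM update; wins[(i, j)] = #times i beat j."""
    p = [1.0] * n_models
    for _ in range(iters):
        new_p = []
        for i in range(n_models):
            total_wins = sum(w for (a, _), w in wins.items() if a == i)
            denom = sum(w / (p[a] + p[b]) for (a, b), w in wins.items() if i in (a, b))
            new_p.append(total_wins / denom if denom else p[i])
        norm = sum(new_p) / n_models
        p = [x / norm for x in new_p]  # normalize so strengths average to 1
    return p

# Models: 0 = target, 1 and 2 = others; balanced head-to-head records.
base = {(0, 1): 6, (1, 0): 4, (1, 2): 5, (2, 1): 5, (0, 2): 5, (2, 0): 5}
p_before = fit_bradley_terry(base, 3)

# Rig ten extra votes in battles NOT involving model 0 (model 2 beats model 1).
rigged = dict(base)
rigged[(2, 1)] += 10
p_after = fit_bradley_terry(rigged, 3)
# Model 1's strength drops, model 2's rises, and model 0's strength shifts
# as well, even though no rigged vote involved model 0.
```

This is the mechanism the omnipresent strategy exploits: because the BT likelihood couples all models through shared matchups, votes between bystander models still move the target's fitted score.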
Essential References Not Discussed: Although the paper cites several watermarking and LLM-attribution approaches, it might benefit from a more direct discussion of LLM-based text style classification used in deepfake text detection. Incorporating references on text classifier reliability (for instance, those from the broader generative text detection literature) could help contextualize how robust or fragile the “de-anonymizing” classification step might be in the long term. Clarifying these references would strengthen the discussion around how stable model identification remains as new models emerge or older models receive updates. Other Strengths And Weaknesses: **Strengths**: 1. The paper systematically outlines a new and important vulnerability in widely used LLM leaderboards. 2. The experimental methodology is rigorous. They run multiple ablations, threat models, and defense attempts, providing a clear picture of the rigging ecosystem. **Weaknesses**: 1. The work mainly addresses Chatbot Arena, which, while a popular benchmark, constrains the scope of the work. 2. Defense methods explored are somewhat rudimentary, primarily detection and filtering. While the paper concludes that robust mitigation is non-trivial, the work would be stronger if it proposed deeper or more systematic solutions. 3. The method’s reliance on a large labeled corpus for the classification step might prove challenging in real-world practice if brand-new models appear frequently. The authors do mention some limited resilience to “unrecognized” models, but perhaps a more online approach could be beneficial. 4. The social and ethical implications of releasing or describing rigging strategies remain non-trivial, although the authors do responsibly disclaim that their demonstration is for educational purposes only. Other Comments Or Suggestions: N/A Questions For Authors: 1.
Given the authors' findings, how feasible do the authors think it would be for a practical adversary to maintain undetectable rigging behavior over extended periods? A detailed response could clarify the practical implications of your strategies. 2. Can the omnipresent rigging strategy generalize effectively to other voting-based evaluation platforms beyond Chatbot Arena? An affirmative answer could significantly enhance the broader impact of the work. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive review and for recognizing our work. Below, we respond to your comments in **Concerns (C)**, **Weaknesses (W)**, and **Questions (Q)**. --- ***C1: No supplementary materials.*** We would like to clarify that we have provided *Supplementary Material* and the folder named *rigging_code* contains the code. --- ***C2: Lack of discussions of LLM-based text classification.*** Thanks for the helpful suggestion. We have included related works on LLM-based text classification in the paragraph *Discussions on strategies to identify LLM through model responses* (page 15). In the final revision, we will move this discussion to the main paper and expand it in detail. --- ***W1 & Q2: Can rigging strategies generalize to other voting platforms?*** Our strategy generalizes effectively to other voting platforms. To verify this, we simulate on *WebDev Arena* and *Copilot Arena*. Since they do not provide historical voting data, we initialize the Bradley-Terry model’s coefficients with public scores and update the leaderboard with newly submitted votes. Results in **Table B** show consistent improvements by rigging 500 votes, validating the effectiveness. **Table B: Rigging simulation on other voting-based leaderboards. The results are absolute ranking (ranking increase).** ||Model|T-Random|Omni-BT|Omni-On| |-|-|-|-|-| |WebDev|o1-mini|8 (+3)|1 (+10)|6 (+5)| ||Gemini-Exp|8 (+7)|1 (+14)|8 (+7)| |Copilot|Gemini-1.5-Flash|3 (+7)|1 (+9)|1 (+9)| ||Qwen2.5-Coder|7 (+5)|1 (+11)|2 (+10)| --- ***W2: The defense methods are somewhat rudimentary.*** We acknowledge that current defenses are relatively straightforward; however, results in $\\textrm{\\color{blue}Figure 4}$ (page 7) show strong effectiveness against several baselines, including T-Abstain, T-Tie, T-Random, and even the vanilla Omni-BT. 
While devising advanced defenses against Omni-On and improved Omni-BT is indeed challenging and remains an open problem, we systematically highlight rigging vulnerabilities in Chatbot Arena—a platform widely used and trusted by the community. By exposing these flaws, our paper can spark broader discussion and inspire future research to develop deeper and more robust defenses. --- ***W3: Efficiency of classifier training if new models appear frequently.*** Our RoBERTa-based classifier is lightweight and efficient to train. As shown in $\\textrm{\\color{blue}Table 7}$ (page 8), training corpus generated using *2,000 prompts* (or even fewer) per model is sufficient for high accuracy on *unseen prompts* and the training cost is around 4 GPU hours on a single A100 GPU. To further address the scalability concern, we propose a hierarchical classification design. Specifically, we can construct multiple sub-classifiers, each distinguishing among $N$ known models and an additional class labeled as *other models*. This allows us to incrementally accommodate new models without retraining the entire system. The total number of classifiers needed to cover $M$ models is $\\lceil M/N \\rceil$. We also appreciate your insightful suggestions regarding more online approaches. Techniques such as class-incremental learning [1] can indeed sequentially update the classifier using data from newly added models while mitigating the risk of catastrophic forgetting. --- ***W4: The ethical concerns of releasing rigging strategies.*** We understand and appreciate your concerns regarding the ethical implications. However, our work is positioned as a red-teaming study, with the primary goal of exposing critical vulnerabilities in widely used LLM leaderboards. As noted in the *Remark* paragraph (Lines 110–111), we proactively informed the authors of Chatbot Arena about the vulnerability prior to submission. 
By bringing these issues to light, we aim to raise awareness within the community and encourage the development of stronger defenses to safeguard future evaluation platforms against such manipulation. --- ***Q1: How feasible is it to maintain undetectable rigging behavior?*** Rigging behavior is indeed difficult to detect, particularly under our *omnipresent rigging*, which improves the ranking of a target model $m\_t$ by manipulating battles that *do not* involve $m\_t$ directly. As a result, detection is challenging for defenders focusing on the voting patterns of $m\_t$ alone. For instance, attackers can avoid detection by behaving normally in battles involving $m\_t$. Moreover, modern LLMs exhibit strong generative capabilities and often provide responses that differ subtly in style rather than in correctness. This makes it difficult for users to definitively judge which response is better. Consequently, voting outcomes can vary significantly depending on user preferences. Without a *ground-truth* ranking for reference, it is **difficult to distinguish between a ranking increase caused by vote rigging and one driven by genuine user preference**. --- ***Reference:*** \ [1] Class-Incremental Learning: A Survey --- Rebuttal Comment 1.1: Comment: Thank you for providing a detailed response. Overall, I appreciate the paper and the thoroughness. From a more idealized standpoint, I think the work tackles an important question in crowdsource evaluation. However, I am skeptical regarding how realistic this is in practice as the method would require rigging large amount of votes (begin to see meaningful differences after 4000 new votes; Figure 2), and there is no real evidence this is happening or will likely to happen. That being said, I do think it is good to raise concerns. I appreciate the work and believe my current score of leaning towards accept is appropriate.
--- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for recognizing the thoroughness and contribution of our work. We understand the practical concerns raised and would like to further clarify the real-world feasibility of vote rigging. --- **Regarding the comment: “…how realistic this is in practice as the method would require rigging large amount of votes (e.g., 4000 new votes)…”** In practice, $\\textrm{\\color{blue}submitting rigged votes can be highly efficient}$. The actual rigging pipeline on the Chatbot Arena platform involves just three steps: **(1) de-anonymization by $\\mathcal{A}$, (2) deciding manipulations via $\\mathcal{M}$, and (3) clicking the voting button**. The only difference between our simulation and the real-world setting lies in step (3), where bot detection mechanisms (e.g., Cloudflare or CAPTCHA) may be present. However, these protections can be bypassed using standard techniques such as IP rotation, Slowloris attacks, and CAPTCHA-solving services—or even minimal human labor. In practice, submitting a single rigged vote manually takes less than 20 seconds. With just 10 people (e.g., lab mates or hired workers), over 2,000 rigged votes can be submitted within an hour. A more organized attacker could easily scale this up to tens or hundreds of thousands of votes. --- **Regarding the comment: “…there is no real evidence this is happening or will likely to happen”** We fully agree that the goal of doing well on benchmarks like Chatbot Arena should be to create a better LLM. Unfortunately, given the expensive and sometimes unaffordable trial-and-error cost of developing a better LLM, it is possible that some LLM companies/startups/institutions will be motivated to cheat on benchmarks in order to, for example, **have high promotion impacts and successfully raise capital**. 
While vote rigging on Chatbot Arena is practically feasible, as outlined above, we deliberately avoided attacking the live platform out of ethical considerations. Our intent is not to contaminate its data, but rather to raise awareness as a red-teaming work, and provide reproducible simulation results that can inform the design of more robust and trustworthy leaderboard systems. --- Once again, we truly appreciate your comments and will clarify the practicality of our method in the final version of the paper.
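The throughput estimate in the reply above reduces to simple arithmetic (18 seconds per vote is an illustrative figure consistent with the reply's "less than 20 seconds"):

```python
def votes_per_hour(people, seconds_per_vote):
    """Manual rigging throughput: each person submits one vote every `seconds_per_vote` seconds."""
    return people * 3600 // seconds_per_vote

# 10 people at 18 seconds per vote clear 2,000 votes in an hour,
# matching the reply's "over 2,000 rigged votes within an hour".
print(votes_per_hour(10, 18))  # 2000
```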
Summary: The paper investigates the problem of manipulating the ranking of a target model on the anonymous voting platform Chatbot Arena. The authors begin by examining a straightforward approach called target-only rigging, which involves attempting to influence votes only in battles where the target model is present. They demonstrate the limited effectiveness of this method due to the low frequency of such battles. To overcome this limitation, the authors propose more efficient omnipresent rigging strategies. These strategies leverage the Elo rating system used by Chatbot Arena, showing that even votes in battles not involving the target model can impact its overall ranking. The authors conducted experiments by simulating attacks to showcase the performance of their techniques and discuss potential defense mechanisms against such attacks. ## Updates after rebuttal I have decided to maintain my score. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: NA Experimental Designs Or Analyses: Sound. Supplementary Material: No Relation To Broader Scientific Literature: This paper contributes to the growing body of literature on the security and reliability of anonymous voting systems, particularly in the context of evaluating large language models (LLMs). The authors specifically address the Chatbot Arena platform, which is widely used as a benchmark for LLM performance. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: The paper excels in providing a clear and well-defined framework for understanding the different scenarios involved in manipulating a target model's ranking. The authors effectively categorize these scenarios based on factors such as the anonymity of the models and the availability of historical data. The detailed and intuitive presentation of the concepts makes the paper easy to follow and understand.
The paper's findings are significant as they highlight a potential vulnerability in a widely used benchmark for evaluating LLMs. By demonstrating the effectiveness of even relatively small-scale vote rigging attacks, the paper raises important questions about the trustworthiness of anonymous voting platforms and underscores the need for robust defense mechanisms. Weaknesses: While the authors present compelling simulations based on historical data from Chatbot Arena, the absence of actual, live experiments on the platform could be considered a limitation. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your very positive and supportive review. Below we respond to the comments in **Weaknesses (W)**. --- ***W1: Absence of actual, live experiments on the platform.*** We acknowledge that we did not conduct live experiments on the platform, primarily due to ethical considerations. Instead, we opted for practical simulations, which offer more reproducible results and can better support future research in further investigating rigging vulnerabilities. In fact, as mentioned in the *Remark* paragraph (Lines 110–111), we proactively reached out to the authors of Chatbot Arena in September 2024 to share our preliminary findings on rigging vulnerabilities. They also recommended conducting simulations based on historical data. Below, we provide detailed explanations: - **Rigging the actual platform raises ethical concerns**: As one of the most widely used LLM benchmarks nowadays, submitting rigged votes to Chatbot Arena would compromise the platform’s integrity and unfairly impact model developers whose models may be pushed down in the rankings due to manipulated results. Our objective is to expose potential vulnerabilities in Chatbot Arena and encourage the community to develop robust defenses against such attacks. To this end, we provide practical simulations that closely mirror the real platform, ensuring reproducibility without interfering with the actual leaderboard or affecting real voting data. - **Vote rigging can be conducted in practice**: While real-world platforms may employ standard defenses such as Cloudflare and CAPTCHA, practical attackers can leverage well-established techniques, such as IP rotation, Slowloris attacks, and CAPTCHA farms, within rigging scripts to bypass many of the traditional network- and application-level protections. These capabilities *effectively narrow the gap between our simulated experiments and real-world vote rigging*, indicating that our methods are applicable in practical scenarios.
Summary: The paper presents a method to manipulate the chatbot arena leaderboard platform, demonstrating the conceptual ability to alter a target model's ranking through strategic voting. The authors propose three rigging schemes: The first is target-specific, involving voting for the target model whenever it appears in a pairwise comparison. The other two schemes exploit the scoring dynamics of the Bradley-Terry model (with and without historical voting data) to modify the target model's score by strategically voting for other candidates. ## Update after rebuttal: Considering the novelty and technical contribution of the paper, I have decided to keep my score. Claims And Evidence: Claims made in the submission supported by clear and convincing evidence Methods And Evaluation Criteria: Proposed methods make sense for the application at hand. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: All the experiments in the main body of the paper appear valid. Supplementary Material: yes, all of it. Relation To Broader Scientific Literature: The methods proposed by the authors specifically demonstrate how the ranking of a target model can be altered on the Chatbot Arena platform. The main contribution of the paper concerns voting and manipulating voting scores. More broadly, this paper addresses the scoring of LLMs, focusing on model identification, evaluation, and relative ranking. Essential References Not Discussed: The key contribution is a method to manipulate the Bradley-Terry model as a scoring system. However, there is a lack of a literature review on the vulnerabilities of scoring systems, such as their susceptibility to strategic voting, the addition of new candidates, and similar issues. Given that the Bradley-Terry model is well-established, there should be known attacks that are effective against it. Other Strengths And Weaknesses: Strengths: 1) The paper is well-written, clear, and concise. 
2) It demonstrates a relevant weakness in a well-known system. 3) It bridges the problem of social voting (and its vulnerabilities) with the current need to score LLMs. Weaknesses: 1) There is a missing literature review related to social voting and its vulnerabilities. 2) The novelty of the proposed rigging strategies is unclear: a. The approach is based on strategic voting, but it is unclear whether the methods are already known from previous papers discussing vulnerabilities in social voting and scoring systems (e.g., specifically for the Bradley-Terry model or other generalized models). b. If the methods are indeed novel, it is uncertain whether they are relevant to other voting and scoring models. 3) There are multiple defense mechanisms in place, such as Cloudflare, CAPTCHA, and user authentication (some of which are discussed, and some are not). Therefore, it is unclear whether the proposed methods can be applied in practice outside of a simulated environment. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive review, for recognizing our paper, and for your invaluable suggestions. Below we respond to the comments in **Weaknesses (W)**. --- ***W1: (Essential References Not Discussed) Missing literature review of social voting and its vulnerabilities.*** We appreciate your valuable suggestions, which inspired us to explore earlier works on social voting systems and their vulnerabilities. We identify several relevant studies, for example, [1] analyzes the vulnerability of preference aggregation to strategic manipulation; [2, 3, 4] systematically introduce various voting systems and their susceptibility to manipulation; [5] examines vulnerabilities in political elections; [6] discusses the threat of group manipulation; and [7, 8] propose defense mechanisms to enhance the trustworthiness of voting systems. Following your suggestion, we will incorporate detailed discussions of these works in the paper revision. --- ***W2 (a): Concerns on method novelty; whether they are known in previous papers.*** Following your valuable suggestions in $\\textrm{\\color{green}W1}$, we conduct a detailed survey on social voting systems and their vulnerabilities. While the referenced papers focus on various forms of strategic manipulation, our work introduces several unique challenges in the problem setting and methodological innovations. These are discussed in detail in $\\textrm{\\color{blue}Section 3}$ (page 3), and we briefly recap them as follows: - *The de-anonymizing function* $\\mathcal{A}$: Unlike previous manipulation settings [2], where voting candidates are visible to voters, the sampled models in our case are initially **anonymous** before any rigging occurs. To address this challenge, we design and implement several effective model identification strategies, denoted as $\\mathcal{A}$.
- *Omnipresent manipulation function* $\\mathcal{M}_{\\textrm{omni}}$: In addition to improving the model ranking, we also emphasize **rigging efficiency** in practice. To this end, we design two general-purpose rigging objectives ($\\mathcal{R^{\\textrm{BT}}}$ and $\\mathcal{R^{\\textrm{On}}}$), which significantly outperform baselines and two most relevant prior works in terms of efficiency. --- ***W2 (b): Concerns on whether the rigging methods are relevant to other voting and scoring models.*** Our rigging methods can be applied to other voting/scoring models, such as those using online Elo scores. To demonstrate this, we conduct simulations on a leaderboard updated with online Elo scores. Results in **Table A** show that both methods (Omni-BT and Omni-On) effectively improve the target model $m\_t$’s ranking by rigging 1,000 new votes. Since the Chatbot Arena adopts the Bradley-Terry model for scoring, we primarily report results based on this model in the main paper to better align with the practical setting. **Table A: Rigging simulation on the leaderboard with online Elo scores. The results are absolute ranking (ranking increase).** ||Llama-2-13B-Chat|Mistral-7B-Instruct|Qwen1.5-14B-Chat|Vicuna-7B|Gemma-2-9B-it|Phi-3-small-8k-Instruct| |-|-|-|-|-|-|-| |*Omni-BT*|88 (+4)|74 (+13)|54 (+14)|108 (+5)|87 (+15)|73 (+16)| |*Omni-On*|78 (+14)|65 (+22)|46 (+22)|97 (+16)|81 (+21)|67 (+22)| --- ***W3: Whether rigging strategies are applicable under practical defenses.*** Our method is applicable under practical defenses. First, since our rigging mechanism operates by strategically selecting voting options and submitting votes just like normal users, network-level defenses (e.g., firewalls and DDoS mitigation) and application-level defenses (e.g., bot detection via Cloudflare or CAPTCHA) cannot directly detect or distinguish the rigged votes. 
Additionally, practical attackers can incorporate techniques such as IP rotation, Slowloris attacks, and CAPTCHA farms into their rigging scripts to bypass many of Cloudflare’s and CAPTCHA’s traditional network- and application-level defenses. This *effectively bridges the gap between our experimental simulations and real-world vote rigging*, indicating the practical applicability of rigging methods. More importantly, we chose not to attack the actual platform due to ethical considerations, as we do not wish to contaminate Chatbot Arena’s voting data. Instead, our simulations offer reproducible results that serve academic purposes and can support future efforts to design more robust and trustworthy voting-based LLM leaderboards. --- ***References:*** \ [1] Manipulation of Voting Schemes: a General Result \ [2] Voting Systems and Strategic Manipulation: an Experimental Study \ [3] The vulnerability of point-voting schemes to preference variation and strategic manipulation \ [4] Methods of voting system and manipulation of voting \ [5] The Manipulation of Voting Systems \ [6] On the safety of group manipulation \ [7] Using Information Theory to Improve the Robustness of Trust Systems \ [8] Voting Systems with Trust Mechanisms in Cyberspace: Vulnerabilities and Defenses --- Rebuttal Comment 1.1: Comment: Since de-anonymization is not part of this paper's contribution, the main contribution lies in exploiting the ChatBot Arena platform under the assumption that we know the models with high certainty. Given this setting, it is not surprising that this voting system is vulnerable. However, I firmly believe this paper benefits the LLM-related community by raising awareness and demonstrating the feasibility of exploitation. I believe the score is appropriate, considering its novelty and given the addition of the literature review. 
--- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for recognizing the novelty and value of our work, as well as its benefit to the LLM community. That said, we would like to respectfully clarify the differing views regarding the scope of our contributions, particularly around the role of de-anonymization in our framework. --- **Regarding the comment: “…under the assumption that we know the models with high certainty.’’** We would like to clarify that $\\textrm{\\color{blue}our approach \\emph{does not} assume the model identities are known}$. In our experiments, the identities of the sampled model pairs are *anonymized* in each battle. To address this, we introduce a model classifier $\\mathcal{A}$ (i.e., the de-anonymization function) to infer model identities. While prior work has explored distinguishing between AI- and human-generated text, our classifier-based $\\mathcal{A}$, which discriminates among outputs of different LLMs, is rarely studied. To our knowledge, the most relevant work appears in a later-released study [1], whereas our method extends to a broader set of models for classification. We also consider more practical scenarios where model identities are inferred with *low certainty*, i.e., when the model classifier $\\mathcal{A}$ has limited accuracy. In Table 1 (page 6), we present experiments where only half of the models are correctly identified—leading to incorrect attribution in over half of the matchups. Despite this, our omnipresent rigging strategies remain effective in improving the target model’s ranking, demonstrating robustness even under uncertainty. [1] Idiosyncrasies in Large Language Models --- Once again, we appreciate your comments and will clarify our contributions in the final version.
Online Pre-Training for Offline-to-Online Reinforcement Learning
Accept (poster)
Summary: The paper proposes a novel offline-to-online RL approach where, at the end of the offline phase, a second critic is trained in an "Online pre-training" phase and used in addition to the offline dataset during the final online learning phase. During online pre-training, the offline policy and critic are frozen, and the new critic is trained from the frozen offline policy, filling a new online buffer. By leveraging both the offline critic and the pretrained online critic, offline-to-online learning works better than with former approaches. The algorithm outperforms many state-of-the-art baselines in many environments. ## Update after rebuttal: I was in favor of accepting the paper; after reading all the reviews and exchanges with the authors, I still am. Claims And Evidence: The paper benefits from a strong methodology and the authors provide satisfactory evidence for most of their claims. One exception that I found is in Figure 6 (ablation of the new value function) where the authors should provide information on the variability of the results and use an appropriate statistical test to check that there is a statistically significant difference between the full method and its ablation. Another one is the study of the sensitivity of the algorithm to \kappa, which is far from clear (see comments below). Methods And Evaluation Criteria: Most experimental design details are satisfactory. Theoretical Claims: Does not apply (no theorem in this paper). Experimental Designs Or Analyses: As stated above, I'm OK with all experimental design details. Supplementary Material: I've read it all. Relation To Broader Scientific Literature: The paper is clearly positioned with respect to the relevant literature. Essential References Not Discussed: I could not find an essential missing reference. The authors should definitely read the WSRL paper: Zhou, Z., Peng, A., Li, Q., Levine, S., & Kumar, A. (2024). Efficient online reinforcement learning fine-tuning need not retain offline data.
arXiv preprint arXiv:2412.07762, which has an alternative approach to the same problem, but they cannot be blamed for not doing so as this can be considered "concurrent work" (out less than 4 months ago). Other Strengths And Weaknesses: Strengths: - this is a solid paper, with a strong methodology (5 seeds, hyper-param studies, ablations, etc.) and a good contribution to the domain - the method is quite simple though clever, and performs well - beyond SOTA results, the additional studies in Section 5 are of interest Weaknesses: - writing could be improved here and there Other Comments Or Suggestions: Clarity and writing concerns: - 60% of the abstract says known things in the beginning, the authors could come quicker to the point of their paper. - italic "counter-intuitive trends" is a poor short name for the phenomenon the authors focus on, as it is too general. There are many counter-intuitive trends in many domains, find something more specific to the context of this paper - the "Background and related work" section is quite messy. This could be reorganized into two more clearly organized sections. - Fig. 2 could probably be smaller, if the authors need more space in the main paper - Section 4.1 should refer to Appendix H for the learning curves, close to the tables. - If possible, Fig. 4 should move to p. 7. - In Fig. 5, medium-replay and medium could be swapped, to be in the same order as the text mentions them. The figure is not very readable. - There is an issue with explanations about \kappa deferred to appendices C.2 and D: in Appendix C.2, the authors seem to compare a linear scheduling to another linear scheduling, the point they want to make is very unclear. And I believe all explanations about \kappa in Appendix D should move together with Appendix C.2.
- in 5.4 and appendices, all 25000 and 50000 should be rewritten 25K and 50K so that we see that it corresponds to the 25K online pre-training duration mentioned earlier Typos and similar concerns: - the authors use a lot the "'s" form, even for things that are not persons: "dataset's quality", "algorithm's code", etc. They shouldn't. - the year is often missing in references, this seems to be done on purpose but I consider this as a bad practice. - p. 5: ACA ... which pre-process -> processes - 5.1 Comparison OPT... -> Comparing - p. 7 "In particular, the results on the random dataset, where policy evolves more drastically as shown in Figure 5, the results demonstrate that an overfitted Qon-pt fails to learn with this policy improvement." -> fix this sentence! - OPT is missing in the legend of Fig. 7 - caption Fig. 7 doamin -> domain - A.1: TD3, RLPD -> TD3 and RLPD ... are based on its -> their - no dot after an equation when the sentence is not finished ("... where...") - use "\eqref" rather than "\ref" to refer to an equation. - B.3 has a different objective(s) - F: bayesian -> Bayesian Questions For Authors: - Table 2, first row, umaze is umaze-play, right? - In Figure 3, did the authors use a 300K + 25K + 275K protocol too? Are the first 300K steps shown? The transitions between offline, online pretraining and online should appear clearly, this is also the case of learning curves in Appendix H. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer for the valuable and constructive feedback. We respond to each point in detail below and will reflect the suggested improvements in the revised manuscript. &nbsp; ### **R4-1. Questions** - Question about umaze dataset According to the official D4RL [4-1] paper, the Antmaze environment consists of three types of datasets: umaze, diverse, and play. Among them, the *play* dataset is collected by commanding the ant to navigate from various hand-picked start locations to hand-picked goals. In contrast, the *umaze* dataset, which appears in the first row of Table 2, uses a single fixed start and goal location without diversity in either. Since this setup does not match the definition of a *play* dataset, it is not categorized as one. We will clarify this point in the revised manuscript in Appendix B.2. - Question about training protocol All experiments in our paper follow a training protocol consisting of a 1M offline phase and a 300k online phase. For OPT, the first 25k steps of the online phase are used for online pre-training, while the remaining steps are allocated for online fine-tuning. This setup is also applied in Figure 3. Since the key changes in OPT occur during the online phase, Figure 3 and Appendix H present performance curves for the online phase. This is reflected in the initially flat performance observed in the graphs, as OPT does not update the policy during online pre-training. To avoid any confusion, we will explicitly clarify this in the caption of Figure 3 and in Appendix H. &nbsp; ### **R4-2. Sensitivity to $\kappa$** To provide clearer evidence for the sensitivity of our method, we conduct additional experiments in the Antmaze domain. Table F at the anonymous link (https://sites.google.com/view/icml2025opt/) highlights two key findings: - First, the comparison with fixed $\kappa$ values demonstrates the necessity of scheduling $\kappa$ during training.
- Second, the results indicate that performance is not sensitive to a specific $\kappa$ value; rather, the crucial factor is the gradual transition from $Q^{\text{off-pt}}$ to $Q^{\text{on-pt}}$. These findings validate both the importance of $\kappa$ scheduling and the robustness of the approach to its exact values. &nbsp; ### **R4-3. Regarding Writing and References** We sincerely thank the reviewer for the detailed and thoughtful feedback on the writing. We appreciate the time and care taken to point out areas for improvement, as well as for providing helpful references. All suggestions will be carefully considered and reflected in the revised manuscript, and we believe these revisions will improve the clarity and overall quality of the paper. &nbsp; ### **R4-4. Regarding Figure 6** We thank the reviewer for pointing out the insufficient information provided in Figure 6 regarding variability and statistical significance. To provide further clarity, we include environment-specific results corresponding to Figure 6 in Table G in the anonymous link (https://sites.google.com/view/icml2025opt/). These results show that performance degradation is especially noticeable in the random dataset. We will include these additional results in the revised manuscript. &nbsp; Once again, we greatly appreciate the reviewer's thoughtful comments and suggestions. The feedback has been instrumental in improving the quality and clarity of our work. We hope our responses sufficiently address the concerns raised. &nbsp; [4-1] Fu, Justin, et al. "D4rl: Datasets for deep data-driven reinforcement learning." arXiv (2020).
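The gradual transition from $Q^{\text{off-pt}}$ to $Q^{\text{on-pt}}$ discussed in R4-2 amounts to a scheduled convex combination of the two value estimates. A minimal sketch, assuming a linear $\kappa$ schedule and scalar Q-values for illustration (the paper's exact schedule is a tuned hyperparameter, so this is not the authors' implementation):

```python
def combined_q(q_off: float, q_on: float, step: int, total_steps: int) -> float:
    """Convex combination of the two value estimates.

    kappa decays linearly from 1 to 0 over online fine-tuning,
    shifting emphasis from Q^{off-pt} to Q^{on-pt}.
    The linear schedule is an illustrative assumption.
    """
    kappa = max(0.0, 1.0 - step / total_steps)
    return kappa * q_off + (1.0 - kappa) * q_on

# Early in fine-tuning the offline estimate dominates;
# by the end, only the online estimate is used.
print(combined_q(2.0, 4.0, 0, 100))    # 2.0
print(combined_q(2.0, 4.0, 100, 100))  # 4.0
```

A fixed $\kappa$ corresponds to replacing the schedule with a constant, which is the ablation reported in Table F.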
Summary: This paper presents OPT, a method to improve value estimation in RL. OPT follows three phases: offline pre-training, an "online pretraining" phase to train a separate value function, and online fine-tuning that combines both value functions. Unlike traditional methods that use a single value function, OPT introduces a second one trained with offline data and early online samples for better performance. Claims And Evidence: Several claims lack sufficient statistical significance: 1. The claim of "average 30% improvement" seems dubious when considering standard deviations, which often overlap significantly with baselines. 2. Claims of superiority over Cal-QL (lines 308-314) aren't statistically justified given overlapping standard deviations in Table 2. 3. Performance comparisons in Tables 1-3 highlight mean values without proper statistical significance testing, making it difficult to assess the consistency of improvements (this also includes Table 11, where the authors claim superiority over BOORL without even reporting stdev). 4. The conclusion that OPT works well across backbone algorithms is supported by empirical results, but the improvements for IQL (Table 10) show substantial overlaps in standard deviations, weakening this claim. Methods And Evaluation Criteria: - The evaluation uses appropriate benchmarks (Mujoco, D4RL benchmarks) that are standard in the field. - However, the meta-learning objective from OEMA appears unnecessarily complex without clear justification for why simpler approaches wouldn't work. The paper lacks ablations comparing this meta-learning approach to simpler alternatives. Theoretical Claims: n/a Experimental Designs Or Analyses: As I wrote in the "claims" section above, several issues with experimental design and analysis undermine the paper's conclusions: 1. Statistical reporting is inconsistent – standard deviations are missing entirely from Table 11 and Figures 4 and 6, making it impossible to assess the significance of comparisons.
2. The paper lacks an ablation study on whether Equation 4 (weighted combination of Q-functions) is necessary, given that κ scheduling was shown to have minimal impact (e.g., on the halfcheetah environment). Overall, I feel that there is a lot of overlap between the proposed method and other baselines if we factor in standard deviations, suggesting the improvements may not be statistically significant. Moreover, the selective reporting, i.e., highlighting only the mean values without statistical tests, can create a misleading impression of consistent superiority. Therefore, empirically this paper does not seem sound to me. Supplementary Material: Yes - Appendix Relation To Broader Scientific Literature: The problem statement is relevant to broader literature. Essential References Not Discussed: - Other Strengths And Weaknesses: ## Strengths: 1. Novel approach to offline-to-online RL that addresses a fundamental limitation (inaccurate value estimation) 2. Extensive evaluation across multiple domains (MuJoCo, Antmaze, Adroit) 3. Good ablation studies on components like initialization methods and sample sizes 4. Paper is easy to follow ## Weaknesses: 1. Lack of theoretical justification for using two separate value functions (minor) 2. Insufficient comparison with simpler alternatives to the proposed complex approach (i.e., why do we need to use OEMA in Equation 3?) 3. Selective reporting of results that emphasizes means over statistical significance, and significant overlap with baselines Other Comments Or Suggestions: Typos: line 165: "as" instead of "at"; line 167: "B_off." instead of "B_off," Questions For Authors: 1. How much additional computational cost is incurred when using OPT vs a baseline like TD3 or IQL? 2. Could you provide results using the rliable library [1]? The analysis can be made stronger with aggregated statistics like IQM rather than simply using means+stdev. 3. Why is the meta-adaptation objective from OEMA necessary?
Have you compared with simpler approaches for training the new value function? 4. Can you provide "aggregated" learning curves with standard deviations for all baseline methods in Tables 1 and 2, similar to Figure 3, to allow for fair assessment of performance differences? (I was hoping for an aggregate curve in Appendix but could only find per-environment curves in Figure 8) [1]: Agarwal, R., Schwarzer, M., Castro, P. S., Courville, A. C., & Bellemare, M. (2021). Deep reinforcement learning at the edge of the statistical precipice. Advances in neural information processing systems, 34, 29304-29320. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback, which helps improve both the clarity and completeness of our analysis. We address each point below. &nbsp; ### **R3-1. Regarding Statistical Significance** To reduce statistical uncertainty, we increase the number of random seeds from 5 to 10 for all baselines in Tables 1-3 and 10, and report 95% confidence intervals (CI) instead of standard deviation to better reflect the reliability of the results. Tables A-C at the anonymous link (https://sites.google.com/view/icml2025opt/) show that OPT consistently outperforms baselines, with minimal CI overlap, indicating statistical significance. Table E further supports this trend when combined with IQL, demonstrating robustness across backbone algorithms. For clarity, we bold entries whose CIs include the highest mean value among compared methods. We will include these results in the revised manuscript. &nbsp; ### **R3-2. Regarding Aggregated Statistics** We appreciate the suggestion to use aggregated metrics such as the Interquartile Mean (IQM) and include corresponding comparisons. Figure D at the anonymous link (https://sites.google.com/view/icml2025opt/) shows our method consistently achieves the strongest performance. We also provide aggregated learning curves. As shown in Figure E, our method demonstrates consistent improvement, with especially strong results in the Adroit domain. These results support the statistical significance and overall effectiveness of our approach. We will include them in the revised manuscript. &nbsp; ### **R3-3. OPT with Simple Alternatives** As described in Section 3.1, Online Pre-Training initializes the new value function to enable effective online fine-tuning. To achieve this, we proposed a meta-learning strategy, formalized in Equation 3. The second term in Equation 3 updates the value function using online samples while incorporating gradients from $\mathcal{L}^{\text{off}}$.
This aligns the two terms, allowing $Q^{\text{on-pt}}$ to generalize across offline and online data. To assess its usefulness, we compare it with a simpler alternative that jointly trains on $B_{\text{off}}$ and $B_{\text{on}}$, as follows: $\mathcal{L}_{Q^{\text{on-pt}}}^{\text{pretrain}} = \mathcal{L}^{\text{off}}(\psi) + \mathcal{L}^{\text{on}}(\psi)$ This experiment is shown in Table D of our anonymous link (https://sites.google.com/view/icml2025opt/). The variant **Pre-trained with $B_{\text{off}}$ and $B_{\text{on}}$** shows a performance drop in harder tasks, e.g., large mazes, due to conflicting learning dynamics between the two terms, which the simpler method cannot resolve. In contrast, our method reconciles the two objectives, enabling effective online fine-tuning. These results highlight the value of the meta-adaptation objective under distribution shift. We consider this a key contribution and will include this discussion in the revised manuscript. &nbsp; ### **R3-4. Ablation Study for Equation 4** We conducted experiments in Antmaze to assess the necessity of Eq. 4. Table F at the anonymous link (https://sites.google.com/view/icml2025opt/) compares $\kappa$ scheduling with fixed alternatives. While fixed $\kappa$ works in simple tasks, performance degrades in harder environments. In particular, $\kappa=0.5$ causes instability due to prolonged reliance on $Q^{\text{off-pt}}$. We also observe that performance is not sensitive to a specific $\kappa$ value, suggesting the key factor is a gradual shift from $Q^{\text{off-pt}}$ to $Q^{\text{on-pt}}$. These results underscore the importance of Equation 4 for stable and effective training. &nbsp; ### **R3-5. Computational Cost** We compare the wall-clock time of TD3 and our method.
As shown in Figure F at the anonymous link (https://sites.google.com/view/icml2025opt/), TD3 takes about 4000 seconds, while ours takes around 6000 seconds due to the added value function and Online Pre-Training ($N_\tau = 25k$, $N_{\text{pretrain}}=50k$). Although this introduces extra overhead, we believe the performance gains reasonably justify the additional cost. We will include this analysis in the appendix. &nbsp; ### **R3-6. Regarding Theoretical Justification** Our work aims to empirically address challenges in offline-to-online RL. Although it does not include theoretical justification, the results support the effectiveness of our design. We recognize the value of theoretical analysis and consider it a promising future direction. &nbsp; ### **R3-7. Regarding Missing Standard Deviations** We apologize for omitting standard deviations in Table 11 and Figures 4 and 6. We will include them in the revised manuscript for proper assessment. &nbsp; We appreciate the reviewer again for the valuable feedback and hope our response sufficiently addresses the concerns. We believe our method achieves statistically significant performance and is readily applicable to a wide range of algorithms. We hope this will be considered in the reviewer’s reevaluation of our work. --- Rebuttal Comment 1.1: Comment: Thank you for answering my concerns and reporting the scores using 95% CIs. Looking at Table A and Figure E in the anonymous link shared, it is evident that the method does not provide huge benefits over other baselines in Mujoco, but does help in Adroit and Antmaze (slightly). Therefore, I am raising my score to 3: Weak Accept. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful follow-up and for reconsidering the score. We sincerely appreciate your careful evaluation based on the additional results, as well as your acknowledgment of the method’s improvements in Adroit and Antmaze.
Your feedback has been very helpful in improving our work, and we are grateful for your time and constructive comments.
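The Interquartile Mean used in R3-2 above is simply a 25% trimmed mean over runs: sort the per-seed scores, drop the bottom and top quarter, and average the middle half. A minimal sketch with hypothetical scores (rliable computes the same aggregate, plus stratified-bootstrap CIs; the numbers below are illustrative, not results from the paper):

```python
import numpy as np

def iqm(scores):
    # Interquartile mean: drop the bottom and top 25% of runs,
    # then average the remaining middle 50%. Less outlier-sensitive
    # than the mean, more informative than the median.
    s = np.sort(np.asarray(scores, dtype=float))
    cut = int(0.25 * len(s))
    return s[cut:len(s) - cut].mean()

# Hypothetical normalized returns for 8 seeds of one method.
scores = [12.0, 55.0, 60.0, 62.0, 64.0, 66.0, 70.0, 99.0]
print(iqm(scores))            # 63.0 (mean of 60, 62, 64, 66)
print(np.mean(scores))        # 61.0 (pulled around by the outliers)
```

This illustrates why the reviewer's request matters: the two outlier seeds (12.0 and 99.0) move the plain mean but leave the IQM untouched.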
Summary: The paper studies the offline-to-online RL problem and proposes a new method called online pre-training. The method aims to solve the sub-optimality problem brought by the offline Q value function during online fine-tuning. Specifically, the method proposes to freeze the offline policy at the beginning of the online fine-tuning stage, collect online data, and initialize the online Q value function on the combination of online and offline data with the OEMA algorithm. During the rest of the online stage, the proposed method trains the policy with the combination of both the online Q value function and the offline Q value function. The paper performs an extensive empirical study on the effectiveness of the proposed algorithm. Claims And Evidence: The paper claims strong performance of the proposed algorithm, which is validated by experiments on the relevant benchmarks over dense and sparse reward environments, and compared with a comprehensive set of baselines. The only point that might seem unfair is that the proposed method used different backbone algorithms for different tasks. As for the effectiveness of each component of the proposed method, the paper also performs good ablation experiments on different initializations of the online Q function. However, it would also be interesting to see whether different initializations affect the asymptotic performance of the algorithm. The decaying coefficient $\kappa$ seems like a very intuitive idea, and the paper attempts to provide an analysis of this design choice as well. However, I do not believe that the current evidence makes sense, because the offline Q function is also trained on the online data. Maybe one way is to show the difference between the online/offline Q functions and the optimal Q function along the training trajectory? Methods And Evaluation Criteria: As mentioned above, the paper already contains a very comprehensive set of evaluations.
However, it would benefit the paper even further if the analysis experiments in Section 5 were also repeated on the whole set of environments. Theoretical Claims: n/a Experimental Designs Or Analyses: yes Supplementary Material: no Relation To Broader Scientific Literature: It provides a flexible method for offline-to-online RL with strong performance, which benefits the community. Essential References Not Discussed: n/a Other Strengths And Weaknesses: n/a Other Comments Or Suggestions: According to Table 4, it seems that RLPD is actually the strongest baseline in general (by a large margin)? Thus it would be nice if it were included in the comparison in Table 1. Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer for the detailed and insightful comments. The suggestions are highly valuable in helping us refine both the presentation and the depth of our analysis. We carefully address each point below. &nbsp; ### **R2-1. Regarding Asymptotic Performance of Figure 4** We appreciate the reviewer for raising this interesting point. To investigate the asymptotic performance of different initialization strategies, we provide learning curves in Figure B at the anonymous link (https://sites.google.com/view/icml2025opt/). As discussed in Section 5.1 of our paper, **Random Initialization** initially causes an early performance drop due to unstable value estimates, which in turn results in lower asymptotic performance. **Pre-trained with $B_{\text{on}}$** achieves performance comparable to OPT in medium and medium-replay datasets but shows weak asymptotic performance on the random dataset. These results further support the effectiveness of the initialization strategy employed in our method. &nbsp; ### **R2-2. Regarding $\kappa$ Scheduling** The motivation behind the design is that during online fine-tuning, $Q^{\text{off-pt}}$ becomes less reliable due to distribution shift, resulting in inaccurate value estimation. To address this, we introduce a $\kappa$ scheduling strategy that gradually shifts emphasis toward $Q^{\text{on-pt}}$, enabling more effective online fine-tuning. We empirically validate this approach through an experiment comparing $Q^{\text{off-pt}}$ and $Q^{\text{on-pt}}$ against an optimal value function during online fine-tuning. The optimal value function is estimated by training a TD3 agent until it achieves near-optimal performance. For a fair evaluation, we use 10 fixed state-action pairs, where states are sampled from the initial state distribution, and actions are selected using the optimal policy.
As shown in Figure C at the anonymous link (https://sites.google.com/view/icml2025opt/), $Q^{\text{off-pt}}$ exhibits higher estimation bias due to its inability to adapt under distribution shift. In contrast, $Q^{\text{on-pt}}$, aided by Online Pre-Training, effectively reduces estimation bias. These results highlight that a gradual shift from $Q^{\text{off-pt}}$ to $Q^{\text{on-pt}}$ serves as an effective mechanism for mitigating estimation bias during online fine-tuning. &nbsp; ### **R2-3. Analysis Experiments on Other Domains** We agree that extending the analysis experiments in Section 5 to all environments can improve the comprehensiveness of our study. We are currently conducting analysis experiments, and one preliminary result is shown in Table D at the anonymous link (https://sites.google.com/view/icml2025opt/), which exhibits patterns consistent with those in Figure 4 of the main paper. We plan to include results covering all environments and analyses in the revised manuscript. &nbsp; ### **R2-4. Regarding Different Backbone Algorithms** The reason for using different backbone algorithms is that no single algorithm consistently performs well across all domains. For example, Off2On[2-1] performs well in MuJoCo but is not evaluated on Antmaze, and our reproduction did not perform well there. Conversely, SPOT[2-2] performs well in Antmaze but underperforms in MuJoCo. As each algorithm tends to be specialized for certain benchmarks, we choose the most suitable backbone for each to better assess the effect of our method. This is possible because our method is designed to be flexible and readily applicable to a wide range of algorithms. We will clarify this in the revised manuscript to avoid potential confusion. &nbsp; ### **R2-5. Regarding RLPD** Our work focuses on methods in the offline-to-online RL framework, which includes a distinct offline phase. 
For this reason, the current manuscript does not include RLPD [2-3] in the main results, as it is primarily designed for online RL settings and does not involve a separate offline phase. However, to prevent confusion in baseline comparisons, we will include RLPD in Table 1 in the revised manuscript, along with a brief explanation clarifying its distinction. &nbsp; We sincerely thank the reviewer again for the constructive comments. The feedback helped us strengthen both the empirical evidence and the clarity of our manuscript. We hope our responses have sufficiently addressed the concerns and contributed to a clearer understanding of our method. &nbsp; [2-1] Lee, Seunghyun, et al. "Offline-to-online reinforcement learning via balanced replay and pessimistic q-ensemble." CoRL 2022. [2-2] Wu, Jialong, et al. "Supported policy optimization for offline reinforcement learning." NeurIPS 2022. [2-3] Ball, Philip J., et al. "Efficient online reinforcement learning with offline data." ICML 2023. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' efforts in the rebuttal, and I think the new results and analysis further improve the quality of the paper. I will raise my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you very much for your positive feedback and for raising the score. We appreciate the reviewer's time and effort in carefully reviewing our paper and response. Thank you again.
Summary: The authors proposed a new offline-to-online RL method called Online Pre-Training (OPT), where a new phase, online pre-training, is added between offline pre-training and online fine-tuning to solve the inaccurate value estimation problem. OPT introduces a separate value function instead of directly continuing to learn the value function trained on the offline dataset. Experiments are conducted on D4RL to confirm its superiority over other offline-to-online methods. Claims And Evidence: See questions. Methods And Evaluation Criteria: See questions. Theoretical Claims: See questions. Experimental Designs Or Analyses: See questions. Supplementary Material: The ablation study and experiment details are relatively sufficient and clear. Relation To Broader Scientific Literature: The idea of two separate Q functions is innovative. Essential References Not Discussed: N/A Other Strengths And Weaknesses: See questions. Other Comments Or Suggestions: N/A Questions For Authors: 1. The experimental results for TD3 in Table 1, vanilla RLPD in Table 4, and other initializations in Table 7 seem to come with significantly larger standard deviations than OPT; taking the standard deviation into consideration, OPT does not seem to convincingly outperform the baselines on a lot of the tasks. Would the authors consider increasing the number of random seeds? 2. The authors conducted an ablation study for the weighting coefficient $\kappa$ on different datasets. However, for a new task, B_init and B_final are not available until you run the fine-tuning, making it hard to choose $\kappa$ before analysis. It would be great if the authors could give a more intuitive guide to balancing strategies according to the task complexity or other properties. 3. The main motivation for the authors to propose the extra online pre-training stage with a separate online Q function is that there are previous empirical experiments in papers like (Nakamoto et al.
2024) showing the smoothness of Q function learning when switching from offline to online, such that the algorithms suffer from an initial unlearning effect. I think it would be a necessary extra experiment to show that the combined Q as in (3) used in online fine-tuning is indeed improved by OPT. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. The comments help improve the clarity and completeness of our work. Below, we address each point in detail. &nbsp; ### **R1-1. Regarding Experimental Results** Following the reviewer’s suggestion, we increase the number of random seeds from 5 to 10 to strengthen the claim regarding the effectiveness of OPT. In Table A-C of the anonymous link (https://sites.google.com/view/icml2025opt/), we report the mean and 95% confidence interval (CI) rather than standard deviation to convey the reliability of the results. OPT consistently outperforms the baseline, and the minimal CI overlap supports the statistical significance of the results. We also provide Interquartile Mean (IQM) [1-1] comparisons in Figure D for each domain, which further confirm that our method achieves strong performance. Additionally, we will increase the number of random seeds for Tables 4 and 7 and include the corresponding updated results in the revised manuscript. &nbsp; ### **R1-2. Intuitive guide of balancing strategies** We appreciate the reviewer for pointing out this insightful consideration. Although we do not provide a theoretical analysis for selecting the weighting coefficient $\kappa$, we offer the following intuitive guideline based on our empirical observations: - If the dataset quality is moderate, we recommend setting $\kappa$ such that the emphasis on $Q^{\text{on-pt}}$ gradually increases during online fine-tuning. - Conversely, if the dataset quality is relatively low, it is beneficial to set $\kappa$ to utilize $Q^{\text{on-pt}}$ from the early stage of online fine-tuning. We will include this practical guideline in the revised manuscript to provide clearer guidance on applying our method across various tasks. &nbsp; ### **R1-3. 
Regarding Additional Experiments on Value Estimation** Since the motivation of our method is to improve value estimation during online fine-tuning, we conduct additional experiments to verify whether the combined value function benefits from OPT. To evaluate this, we compare the value estimation of TD3 and TD3+OPT in three environments where OPT shows significant performance gains: halfcheetah-medium-replay-v2, hopper-random-v2, and walker2d-medium-v2. As an approximation of the optimal value function, we train a TD3 agent with sufficient steps until it reaches near-optimal performance and use its value function as a reference. For a fair comparison, we sample 10 fixed state-action pairs, where states are drawn from the initial state distribution and actions are selected using the optimal policy. We then measure the value estimation bias of TD3 and TD3+OPT throughout online fine-tuning. Figure A of the anonymous link (https://sites.google.com/view/icml2025opt/) presents how the estimation bias evolves during online fine-tuning. These results indicate that TD3 exhibits noticeable estimation bias due to distribution shift, whereas our method reduces bias early in training by leveraging $Q^{\text{on-pt}}$ trained specifically to handle online samples. To further support this, we conduct an additional experiment using the same estimation bias metric, but this time comparing $Q^{\text{off-pt}}$ and $Q^{\text{on-pt}}$ individually. As shown in Figure C, $Q^{\text{on-pt}}$ reduces estimation bias more rapidly than $Q^{\text{off-pt}}$, indicating that the improvement observed above stems from the effective adaptation of $Q^{\text{on-pt}}$ to online samples. These findings confirm that our method enhances value estimation, leading to more stable and effective online fine-tuning. We will include this analysis in the revised manuscript to support our claims. &nbsp; We appreciate the reviewer's detailed and insightful feedback once again.
The suggestions have meaningfully contributed to improving the clarity and rigor of our work. We hope that our responses have adequately addressed the concerns raised. &nbsp; [1-1] Agarwal, Rishabh, et al. "Deep reinforcement learning at the edge of the statistical precipice." NeurIPS 2021.
Graph Inverse Style Transfer for Counterfactual Explainability
Accept (poster)
Summary: The authors introduce GIST, a novel framework that generates counterfactual graph explanations. They leverage spectral style transfer to generate valid counterfactual explanations. Their architecture consists of two components: attention-based node embeddings, and edge probabilities from the Gumbel-softmax trick. After designing their algorithm, they apply it to several real-world and synthetic datasets. They use the datasets BBBP, BZR, ENZYMES, MSRC21, PROTEINS, BA-SHAPES, and COLORS-3. They conduct experiments with respect to several SOTA baselines and compute relevant metrics such as validity and fidelity. They also conduct an ablation study on the interpolation factor, which adjusts the distance from the decision boundary. They show a remarkable improvement in both validity and fidelity. ## update after rebuttal Thanks for the authors' efforts in the rebuttal. I intend to keep my rating. Claims And Evidence: The claims in the paper are mostly correct. The claims are supported by theory and preliminary experiments that show the framework’s potentially ideal behavior. Methods And Evaluation Criteria: The chosen datasets are good choices for their work. However, the authors should also consider adding additional datasets such as NCI1 and MUTAG, as these are common datasets. The authors have also left out a baseline method in counterfactual explanation methods [1]. [1] Bajaj, Mohit, et al. "Robust counterfactual explanations on graph neural networks." Advances in Neural Information Processing Systems 34 (2021): 5644-5655. Theoretical Claims: Briefly checked over theoretical claims and some proofs. Experimental Designs Or Analyses: Checked the design of the experimental setups and they are sound. Supplementary Material: I mostly reviewed the supplementary section and verified the claims. Relation To Broader Scientific Literature: The paper does miss a reference in counterfactual graph explanations; see [1]. [1] Bajaj, Mohit, et al.
"Robust counterfactual explanations on graph neural networks." Advances in Neural Information Processing Systems 34 (2021): 5644-5655. Essential References Not Discussed: Paper [1] from NeurIPS 2021 is quite relevant and is neither referenced nor used as a baseline. [1] Bajaj, Mohit, et al. "Robust counterfactual explanations on graph neural networks." Advances in Neural Information Processing Systems 34 (2021): 5644-5655. Other Strengths And Weaknesses: The paper is well written, theoretically rigorous, and well founded. Other Comments Or Suggestions: n/a Questions For Authors: See above issues Code Of Conduct: Affirmed. Overall Recommendation: 3
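The Gumbel-softmax trick mentioned in the summary (used by GIST to obtain edge probabilities) can be illustrated in isolation. A minimal NumPy sketch of the relaxation, not the authors' implementation; the logits and temperature below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=1.0):
    # Gumbel-softmax trick: perturb the logits with Gumbel(0,1) noise,
    # then apply a temperature-scaled softmax. As tau -> 0 the output
    # approaches a one-hot (discrete) sample while remaining
    # differentiable in the logits, which is what lets edge existence
    # be learned by gradient descent.
    g = -np.log(-np.log(rng.uniform(size=np.shape(logits))))
    y = (np.asarray(logits, dtype=float) + g) / tau
    y = np.exp(y - y.max())  # numerically stable softmax
    return y / y.sum()

# Relaxed "edge present / edge absent" distribution for one candidate edge.
p = gumbel_softmax([2.0, 0.5], tau=0.5)
```

In a graph generator, one such two-way relaxed sample is drawn per candidate edge, and the first component serves as the (soft) edge probability.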
Rebuttal 1: Rebuttal: We thank you for the effort made to review our paper, and for the nice score you chose to give it. Thank you for pointing out RCExplainer [2]. **W1: Missed Bajaj reference**: We were aware of the paper, and decided to review RCExplainer again to see whether we were missing something. We cross-validated it with what is described in [1], specifically in Table 2 (page 12). As per [1], we confirm that it is a heuristic + learning-based method. In our related work section, we specifically stated that we concentrate only on learning-based and generative methods, hence the choice of the methods we compared against. However, we see the value of RCExplainer with its multiple learnt linear decision boundaries and the search over these boundaries to find robust explainers, and we will include it in our related work at camera ready. Unfortunately, the official code doesn't run (*the authors load some Huawei third-party Python packages that aren't available anywhere*) even after debugging and trying to port it to the GRETEL framework to have the same evaluation pipeline. Then, we also tried to run an unofficial code repository at https://github.com/idea-iitd/gnn-x-bench/blob/main/source/rcexplainer.py (we are not the authors, so anonymity isn't breached if you want to take a look); however, the code doesn't support the BBBP, BZR, ENZYMES, MSRC21, and COLORS-3 datasets. It's fine that it doesn't support COLORS-3 since this is multiclass and RCExplainer only does binary classification. However, not supporting the other listed datasets prevents us from comparing it against other SoTA methods and GIST. *We believe a good compromise here is to recognize RCExplainer's validity as a heuristic+learned counterfactual explainer, list it in our related work section, and highlight the fact that we only treat purely learning-based approaches*. What do you think?
**Q1: Where are MUTAG and NCI1?**: We excluded all those datasets from TUDataset (https://chrsmrrs.github.io/datasets/docs/datasets/) that do not have any node attributes. As per the message-passing mechanism of GNNs, the nodes share their feature vectors with their neighbors, hence obtaining meaningful embeddings. Given a graph $G=(X,A)$, GIST overshoots to $G^e=(X^e,A^e)$ whose node features $X^e$ go through TransConv layers. If $X^e$ is missing, then the conv. layer doesn't produce anything meaningful to then estimate the edge probabilities (see Fig. 2). To overcome this hurdle, we added 7 centrality-based features: node degree, betweenness, closeness, harmonic centrality, clustering coefficient, Katz centrality, and Laplacian centrality. In this way, at least we have something interesting to work with and do not rely only on the topology of the graphs. Here is the performance against SoTA in terms of validity and fidelity with 5-fold cross-validation, where the oracle $\Phi$ is a 3-layer GCN with a test accuracy of 86.8% for MUTAG. Unfortunately, even after hyperparameter optimization was done on NCI1 with the introduced node features, any kind of GCN (with any number of layers) and UGFormer [3] with the hyperparameter search space introduced in the original paper do not reach more than 40% accuracy on the test set. We ran experiments with these oracles for NCI1; however, the fidelity of the explainers was negative, which suggests that the explainers are actually performing adversarial attacks rather than explanations on the oracle [1]. Hence, we decided to discard NCI1 and show only MUTAG. *We want to point out that these two datasets aren't suitable for benchmarking purposes since, again, message-passing mechanisms in GNNs rely on node feature aggregations over the neighbors.
These two datasets don't have node features, and we are a bit puzzled how SoTA methods used them to compare against each other.*

| | Validity | Fidelity |
|---|---|---|
| CF$^2$ | 0.026$\pm$0.026 | 0.026$\pm$0.026 |
| CF-GNNExp | 0.447$\pm$0.026 | 0.237$\pm$0.132 |
| CLEAR | 0.921$\pm$0.026 | 0.395$\pm$0.079 |
| iRand | 0.026$\pm$0.026 | 0.026$\pm$0.026 |
| RSGG-CE | 0.947$\pm$0.000 | 0.737$\pm$0.158 |
| GIST | **1.0$\pm$0.000** | **0.737$\pm$0.105** |

[1] Prado-Romero et al. A Survey on Graph Counterfactual Explanations: Definitions, Methods, Evaluation, and Research Challenges. ACM CSUR 2024.
[2] Bajaj et al. Robust counterfactual explanations on graph neural networks. NeurIPS'21.
[3] Nguyen et al. Universal graph transformer self-attention networks. WWW'22.
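The centrality-based feature augmentation described above can be made concrete with a small sketch (purely illustrative, not the authors' pipeline; the function name is ours). It computes two of the seven centralities mentioned — degree and local clustering coefficient — directly from the adjacency matrix; the others (betweenness, closeness, harmonic, Katz, Laplacian centrality) are available in graph libraries such as networkx.

```python
import numpy as np

def structural_node_features(A):
    """Derive structural node features from an adjacency matrix, for
    datasets (e.g., MUTAG, NCI1) that ship without node attributes."""
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1)                        # node degree
    triangles = np.diag(A @ A @ A) / 2.0       # triangles through each node
    wedges = deg * (deg - 1) / 2.0             # candidate closed wedges
    clustering = np.divide(triangles, wedges,
                           out=np.zeros_like(triangles),
                           where=wedges > 0)   # local clustering coefficient
    return np.stack([deg, clustering], axis=1)  # one feature row per node

# toy graph: 4-cycle 0-1-2-3-0 with the extra chord 0-2
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]])
X = structural_node_features(A)  # shape (4, 2)
```

In the rebuttal's setting, per-node vectors like these would stand in for the missing $X$ before message passing.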
Summary: GIST introduces a backtracking approach for graph counterfactual explainability using spectral style transfer. Unlike forward perturbation methods, it refines graphs to preserve global style and local content. GIST achieved excellent results experimentally. Claims And Evidence: Please refer to Strengths And Weaknesses. Methods And Evaluation Criteria: Please refer to Strengths And Weaknesses. Theoretical Claims: Please refer to Strengths And Weaknesses. Experimental Designs Or Analyses: Please refer to Strengths And Weaknesses. Supplementary Material: Please refer to Strengths And Weaknesses. Relation To Broader Scientific Literature: Please refer to Strengths And Weaknesses. Essential References Not Discussed: Please refer to Strengths And Weaknesses. Other Strengths And Weaknesses: **Strengths** - The paper presents a thorough spectral analysis, mathematically proving important properties including spectral gap bounds and Frobenius norm differences, which helps ensure the generated counterfactuals maintain coherence and semantic validity. - GIST is evaluated across eight benchmark datasets, demonstrating its effectiveness in diverse graph settings. **Weaknesses** - The paper introduces an intermediary known graph G in the counterfactual generation process but does not explain how this graph is obtained or why it improves counterfactual accuracy. Since counterfactuals are based on assumptions without ground truth, the reliance on an intermediary known graph raises questions about its validity. - The paper lacks a clear definition of what constitutes a counterfactual in different graph contexts. While structural transformations (e.g., spectral properties) are well-explained, the paper does not specify how counterfactuals are defined in user networks. - The paper does not provide sufficient details on node embedding generation and how these embeddings change during counterfactual transformations. 
- The paper evaluates on eight datasets but does not use commonly used benchmarks from prior counterfactual studies, such as Community and IMDB-M used in CLEAR [1]. It would be beneficial to explain why the selected datasets are appropriate for evaluating graph counterfactuals. [1] Ma, Jing, et al. "Clear: Generative counterfactual explanations on graphs." Advances in Neural Information Processing Systems 35 (2022): 25895-25907. Other Comments Or Suggestions: None Questions For Authors: Please check above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank you for the effort made to review GIST. It's unfortunate you didn't see its value during your original review. With the following, we tackle the weaknesses you mentioned, and hope to convince you of the paper's value. **W1: Intermediate graph $G^e$, and improved accuracy**: $G^e$ is obtained by overshooting the decision boundary of the oracle $\Phi$. This process is described in Sec. B and is referenced in Sec. 4. To reiterate, we take the dataset and divide the instances into sets, each containing instances of the same class according to $\Phi$. Then, for each input $G$ with label $y = \Phi(G)$, we take all the sets whose label isn't $y$, unify and shuffle them. Lastly, we pick the first instance from this shuffled set, and that's $G^e$. This simple yet effective mechanism guarantees the counterfactual begins from the correct region (i.e., $G^e$ is already on the other side of $\Phi$'s boundary w.r.t. the input $G$). *As a consequence, if we didn't use the backtracking mechanism and just returned $G^e$ as the counterfactual, we'd have validity (aka explanation accuracy) equal to 1.* However, $G^e$ might be far from $G$, and we don't want that. That's why GIST walks back towards $G$ and minimizes the spectral difference (Fig. 8 for Graph Edit Distance). By going back, GIST needs to learn not to recross $\Phi$'s boundary and produce an invalid counterfactual; however, this might still happen due to a poor boundary definition. We will acknowledge this limitation and make what is mentioned above clearer at camera ready, by allocating space via shrinking Sec. 5.3 as per **XATo**. **W3: Node embeddings**: I don't completely understand this point; I'll try my best here. Fig. 2 shows our architecture, which takes as input a graph $G=(X \in \mathbb{R}^{n\times d},A)$ that is overshot to $G^e = (X^e \in \mathbb{R}^{n \times d},A^e)$. $X^e$ is then fed to the TransConv layers that project it to a latent space $\hat{X}^e \in \mathbb{R}^\ell$.
The latent features are only used to estimate edge probabilities for the counterfactual candidate $G^*$. What I think you meant is whether the node embeddings in $\mathbb{R}^\ell$ are then injected into the nodes of the selected incident edges via the Gumbel + Bernoulli sampling. The short answer is "no". Thus, one can't trace how the embeddings are "transformed". However, a simple fix would do the trick: once we have $\hat{X}^e$, we can add a decoder network $g: \mathbb{R}^\ell \to \mathbb{R}^d$ - e.g., a learned GCN - to map the embeddings back to the input space, and train this network jointly with Eq. 12 by adding, for instance, $\|X-g(\hat{X}^e)\|_1$. In this way, we can see which embeddings contribute to the lowest difference (a desideratum of counterfactuality [1]) between $X$ and $g(\hat{X}^e)$. By doing this, we obtain a sparsity ↓ (Sec. E) as follows for the original GIST (top row) and GIST with the embedding decoder (bottom row). If you think this is valuable, we can add it at camera ready, in the appendix.

| | AIDS | BAShapes | BBBP | BZR | COLORS-3 | ENZYMES | MSRC21 | PROTEINS |
|---|---|---|---|---|---|---|---|---|
| $\|X - X^e\|_1$ | 2.07 | .82 | .81 | .63 | 1.76 | .96 | .77 | 1.46 |
| $\|X - g(\hat{X}^e)\|_1$ | 1.36 | .63 | .50 | .41 | .97 | .74 | .38 | .83 |

**W4: Community & IMDB-M**: Community was synthetically ad-hoc generated in CLEAR. This is why we can't reproduce the same dataset as in that paper. So, we opted to choose TreeCycles and BAShapes to cover RSGG-CE [2] and CF-GNNExp. [3], which were already supported in the GRETEL framework [4]. The results for IMDB-M with 5-fold cross-validation are below. Note that we ran CLEAR from scratch and the reported validity (0.45) isn't the one reported in the original paper (0.96). GIST is the best in terms of validity and fidelity w.r.t. a 3-layer GCN whose test accuracy is 48%. Here, we are using the version of GIST without node embeddings for consistency with what is reported for the other datasets in the paper (Sec. G.2).
| | GIST | iRand | CF$^2$ | CLEAR | CF-GNNExp. | RSGG-CE |
|---|:---:|:---:|:---:|:---:|:---:|:---:|
| Validity | **0.87** | 0 | 0.71 | 0.45 | 0.67 | 0.69 |
| Fidelity | **0.17** | - | 0.09 | 0.05 | 0.17 | 0.09 |

iRand doesn't produce any valid counterfactuals, hence its fidelity cannot be measured. Also, all the methods have a low fidelity due to the oracle's "horrible" classification skills. We tried UGFormer [5], the best SoTA on IMDB-M; however, we got a test accuracy of 33% with the same hyperparameters as in the original paper, instead of the 89.2% reported on paperswithcode.

[1] Wachter et al. Counterfactual explanations without opening the black box: Automated decisions and the GDPR.
[2] Prado-Romero et al. Robust stochastic graph generator for counterfactual explanations. AAAI'24.
[3] Lucic et al. CF-GNNExplainer: Counterfactual explanations for graph neural networks. AISTATS'22.
[4] Prado-Romero & Stilo. GRETEL: Graph counterfactual explanation evaluation framework. CIKM'22.
[5] Nguyen et al. Universal graph transformer self-attention networks. WWW'22.
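The overshooting step described in **W1** (pick a dataset instance already on the other side of $\Phi$'s decision boundary) can be sketched in a few lines. The function name `overshoot` and the toy integer "oracle" below are our hypothetical stand-ins, not the paper's API:

```python
import random

def overshoot(dataset, G, phi, seed=0):
    """Return a G^e whose oracle label differs from phi(G): unify all
    differently-labeled instances, shuffle, and take the first one.
    Returning G^e directly would already give validity 1; the method
    then backtracks from G^e toward G."""
    y = phi(G)
    pool = [H for H in dataset if phi(H) != y]
    random.Random(seed).shuffle(pool)
    return pool[0] if pool else None

# toy example: "graphs" are integers and the oracle classifies by parity
g_e = overshoot([1, 2, 3, 4], 2, lambda g: g % 2)  # some odd instance
```

Any instance drawn this way is guaranteed to lie in a class different from the input's, which is the only property the backtracking mechanism needs to start from.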
Summary: The authors present a new method for generating counterfactual explanations for Graph Neural Networks (GNNs) based on an adaptation of neural style transfer. They then establish some theoretical results for the well-foundedness of their approach before presenting their method to learn the style transfer objective. Finally, the authors benchmark their method on synthetic and real-world datasets against current baselines. Claims And Evidence: The claims presented are supported by clear and convincing evidence. Methods And Evaluation Criteria: The method, datasets, and baselines make sense for the problem studied. The baselines and datasets are also relevant. However, I am not convinced by the choice of metrics, as the comparison with the baselines does not seem entirely fair. You should include a measure of similarity, see [1]. [1] Counterfactual explanations and how to find them: literature review and benchmarking. Riccardo Guidotti, 2022. Theoretical Claims: I have checked the correctness of all the proofs of the lemmas and theorems in the paper. Experimental Designs Or Analyses: The experimental design is sound, and the analysis of parts 5.2 and 5.3 is correct, although incomplete in my opinion, as a similarity metric is missing, which is key for counterfactual explanation (see essential references not discussed point). Supplementary Material: I have reviewed Appendix A to F. Relation To Broader Scientific Literature: The work seems interesting, although it is difficult to compare their method with the chosen baselines, since it focuses primarily on the Validity metric. Essential References Not Discussed: A similarity metric was not used to assess the method versus the baselines proposed, as described in [1]. [1] Counterfactual explanations and how to find them: literature review and benchmarking. Riccardo Guidotti, 2022. Other Strengths And Weaknesses: Strengths: - The idea is interesting. - The writing is good.
- The experiments are extensive and thorough. Weaknesses: - In my opinion, the modest theoretical insights do not really add any interpretability to the counterfactual explanations. They do, however, show that you can interpolate between two graphs. - A similarity metric should be used to compare the baselines, and ideally would be addressed in the objective. - Some weaknesses and limitations of the proposed method should be discussed in the paper. Other Comments Or Suggestions: - In part 4.1, why do you assume that the Laplacian matrices commute? I know in the proof of theorem 4.4 you separate the cases commuting/non-commuting, but reusing the notation in part 4.1 is confusing. - Put "BCE" in equation 6. - Part 5: Clarify that you are backtracking from a dataset counterfactual. - Part 5.3: in my opinion this part is too long. It echoes the basic interpolation results from 4.1, but it provides little in the characterization of counterfactual explanations. - The proof of Weyl's inequality in the appendix is unnecessary. - Overall, the proofs in Appendix A.1 should be much shorter (from 2 pages to 1/2 page); this is very basic linear algebra. - Appendix A.2 is also very slow for an obvious property; only step 5 is necessary. - Appendix A.3: Another direct, simple application of Weyl's inequality; this is also very slow. Note that step 1 is poorly formulated. - Appendix A.4 should be a one-liner. - Overall, the theoretical insights claim appears weak. You are just interpolating between two graphs. The analysis proves that, indeed, your framework interpolates between two graphs in terms of eigenvalues and norm. How does it relate to counterfactual explanation? - Appendix 2: I don't understand equation (50). What is the minimum of a shuffled set? Also, $Q$ is not defined. Questions For Authors: - definition 4.1: I am puzzled about your definition of style and content. How do you motivate using the Laplacian of graphs to define style?
- definition 4.1: Why not take the style between $G^*$ and $G^\varepsilon$ and the content between $G^*$ and $G$? - Part 5.1 and Appendix C: for the parameters of the other explainers, wouldn't a fair comparison try to optimize their parameters for validity? - I am skeptical about your choice of metrics; would not $\alpha = 1$ achieve perfect validity and very good fidelity, as you would simply return the dataset counterfactual? Then what is the point of the analysis in 5.2? - Part 5, Figure 4: Why are there so many zeros in those graphs? Is this related to the difference in the sizes of $G$ and $G^\varepsilon$? - Part 5: Can you give some spectral analysis for $\alpha = 0.5$? This choice of parameter may make it harder for your method to perform according to the theoretical results. - Part 5: How is your method faring compared to the baselines wrt a similarity metric (eg. GED for example)? - I think an interesting direction could be to look into incorporating the GNN prediction to adjust the alpha parameter, hence actually balancing the similarity and validity metric in your objective. Have you considered such improvements? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Oh, this is awesome. Thanks for reading the paper and not using an LLM to generate your review :-) Your suggestions will improve our paper. The proofs will be much shorter and the clarifications on counterfactuality more thorough. Sec. 5.3 contains the two most important evaluation metrics as per [1]. We'll put the GED (now in Sec. G.1) in the main paper. **Just interpolation; no insights for counterfactuals**: While the literature advocates for "hard-core" generative models, GIST learns to backtrack from a dataset example $G^e$ toward $G$. We agree that the main contribution is interpolation, but its applicability to counterfactuality is shown in the experiments. Interpolating $G$ and $G^e$ guarantees $G^*$ is in-distribution, which promotes plausibility [2]. We'll clarify this better at camera ready, and measure plausibility. **Why Laplacian?**: Laplacians capture global structural patterns, e.g., connectivity and symmetry, that are largely invariant to specific node identities. This follows a similar rationale to neural style transfer in images, where Gram matrices of feature activations are used to model style since they encode correlation patterns among features rather than spatial arrangements. Additionally, using the Laplacian aligns with prior work in spectral graph theory, where the eigenvalues and eigenvectors of the Laplacian are shown to be robust descriptors of global structure, and have been used in graph matching [3] and generation [4]. **Eq. 50?**: $Q: \mathcal{G}\times \Phi \to \mathcal{G}$ (we missed this). We put $\arg \min$ to emulate a for-loop over $U$. This means we take the first $G^e \in U$. In hindsight, this is also confusing to us.
We'll change it to $k^* = \min \\{ k \in [1,n] \mid \Phi(G) \neq \Phi(G_k) \\}$, where $G_k$ is the $k$-th element of $U$, and $G^e = G_{k^*}$. **Similarity metric, objective func and GED**: By using $G$ as a pulling factor governed by $\alpha$ to produce $G^*$, the similarity is already addressed in the objective, although not explicitly as in the desideratum in [2]. In principle, the more $G^*$ goes toward $G$, the lower the edit distance should be, as well as the validity (Fig. 3). We do report the GED (Fig. 8, Tab. 5-12): GIST is better than CF-GNNExp (our main competitor in validity), though it's not the best across the board due to our choice of $\alpha = 0.9$. A lower $\alpha$ would improve similarity at the cost of validity, a trade-off we make explicit. **SoTA hyperparams**: We used the hyperparameters reported in their original papers. We could optimize them for validity, but this would require months of optimization. We can optimize them for one dataset (e.g., BAShapes) and use the same for the rest. Do you think this is a good compromise? **Sec. 5.2 and $\alpha=1$**: We aim to balance validity and similarity. Setting $\alpha=1$ would ignore the counterfactual signal, leading to perfect validity but poor GED, as $\mathcal{L}_{style}$ becomes zero and no learning (backtracking) happens. Our formulation indirectly encourages similarity through $\alpha$, which controls how much $G$ influences $G^*$. The analysis in Sec. 5.2 is necessary to demonstrate how this trade-off affects GED, even if it's not an explicit loss term. **Fig. 4**: The zeros are indeed paddings needed to compute the spectral differences, which require the adjacency and degree matrices to be of the same dimensions. To account for different graph sizes, we could use the Wasserstein distance of the eigenvalues instead of L1. This would need further theoretical investigation. We'll clarify this in the figure's caption.
**Spectral analysis for $\alpha=0.5$**: Your intuition is right; when $\alpha=0.5$, GIST struggles between preserving content and matching style. We computed the Frobenius norms (expected vs. produced) as in Fig. 6 for $\alpha=0.5$ on AIDS and get an error of 0.537, two orders of magnitude higher than with $\alpha=0.9$. We also emulate Fig. 5 with $\alpha=0.5$ and obtain an error of 0.013 instead of 0.005 as with $\alpha=0.9$. Lastly, for Fig. 4 we have an error of 0.043 instead of $2.051\times10^{-3}$. Unfortunately, we can't show images here, but we'll include this analysis in the appendix and mention it in a new limitations section. **Switch content and style**: Taking the style from $G^e$ and the content from $G$ would attempt to generate a graph structurally similar to a perturbed counterfactual but semantically identical to the original, which defeats the purpose of generating counterfactuals, since semantics usually drive $\Phi$'s prediction. GIST is designed to deviate minimally from $G$ structurally, but still flip the class as $G^e$ does.

[1] Prado-Romero et al. A survey on graph counterfactual explanations: definitions, methods, evaluation, and research challenges. CSUR'23.
[2] Guidotti. Counterfactual explanations and how to find them: literature review and benchmarking.
[3] Yan et al. A short survey of recent advances in graph matching. ICMR'16.
[4] Dwivedi et al. Benchmarking Graph Neural Networks. JMLR'23.
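To make the Laplacian-as-style intuition and the Fig. 4 padding remark above concrete, here is a toy sketch (our own simplification of a spectral comparison, not the paper's actual loss):

```python
import numpy as np

def laplacian_spectrum(A):
    """Ascending eigenvalues of the combinatorial Laplacian L = D - A."""
    A = np.asarray(A, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))

def spectral_style_distance(A1, A2):
    """L1 gap between two Laplacian spectra. Padding the smaller graph
    with isolated nodes (the zeros discussed for Fig. 4) only adds zero
    eigenvalues, which prepend to the ascending spectrum."""
    s1, s2 = laplacian_spectrum(A1), laplacian_spectrum(A2)
    n = max(len(s1), len(s2))
    s1 = np.pad(s1, (n - len(s1), 0))
    s2 = np.pad(s2, (n - len(s2), 0))
    return float(np.abs(s1 - s2).sum())

K3 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])  # triangle
P3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # path on 3 nodes
K2 = np.array([[0, 1], [1, 0]])                   # single edge
d_same = spectral_style_distance(K3, K3)  # identical style
d_path = spectral_style_distance(K3, P3)  # spectra {0,3,3} vs {0,1,3}
d_pad = spectral_style_distance(K3, K2)   # {0,3,3} vs padded {0,0,2}
```

The Wasserstein distance between spectra mentioned in the rebuttal would replace the L1 term here, giving a padding-free alternative for graphs of different sizes.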
Summary: This paper introduces Graph Inverse Style Transfer (GIST), a novel framework for counterfactual explainability in graph neural networks (GNNs). Unlike traditional forward perturbation-based counterfactual methods, GIST employs a backtracking mechanism inspired by style transfer in vision. By first overshooting a decision boundary and then refining the counterfactual graph to align with the original graph's spectral properties, GIST aims to generate semantically valid and structurally faithful counterfactuals. The method is evaluated on eight benchmark datasets, where it demonstrates an increase in counterfactual validity and improvement in fidelity compared to state-of-the-art approaches. ## Update after rebuttal Thanks to the authors for the response. Some of my concerns have been addressed; I intend to maintain my original score. Claims And Evidence: Most claims in the submission are well-supported. Methods And Evaluation Criteria: Yes, the method and evaluation are in a decent design and suitable for the studied problem. But it would be beneficial to include more baselines (as this field has been well studied in the past few years). Theoretical Claims: The provided theory makes sense at a high level. Experimental Designs Or Analyses: The experiments could be improved with more classical and SOTA baselines, and include more complex real-world graphs for in-depth evaluation. Adding further user case studies for qualitative assessment would also be beneficial. Supplementary Material: Yes, I reviewed the supplementary material. Relation To Broader Scientific Literature: The proposed work relates to general counterfactual explainability in GNNs. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper introduces a novel and well-theorized counterfactual framework for GNNs.
But there are some other potential concerns here to address: - While the experimental results show improvements over baselines, a more detailed analysis of why GIST outperforms existing methods, beyond just numerical results, would make the contributions clearer. - It would be beneficial to include more specific user studies. Other Comments Or Suggestions: N/A Questions For Authors: - How scalable is GIST in large-scale real-world graphs? - How does GIST compare to causal-based counterfactual approaches rather than just perturbation-based baselines? - Have you considered potential applications in real-world case studies? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for the effort made to review our paper, and for the nice score you chose to give it. With the following, we hope to answer your questions, and convince you of the value of GIST. **W2: Specific user studies.** We want to point out that the scope of this paper was not to involve users (in fact, no user studies are present in the paper) in assessing the goodness of the produced counterfactuals, since graph counterfactuals can be quite complicated to illustrate, especially when many edges/nodes are being added/removed from the original instance (see **W1** for a proposal of qualitative analysis). **W1: Detailed analysis of GIST vs. SoTA beyond numerical results.** As per [3] - the most up-to-date survey in graph counterfactuality - all methods, besides those designed specifically for molecule explanations where illustrations are possible and very clean visually, use the quantitative metrics we used (see Sec. E & G.2). However, we found in the literature (e.g., [1,2] seem to be consistent in styling) an interesting and visually-pleasing way to show the differences between the original instance and the produced counterfactual by illustrating their adjacency matrices. We will extend this way of visualizing also to node feature perturbations, since GIST supports this mechanism, differently from the above-cited papers. We believe this would add value to the quantitative measures we provided in the paper, and would appreciate your input in this regard. **Q1: How scalable is GIST in large-scale real-world graphs?** In Sec. F, we have a time complexity analysis that treats both densely- and sparsely-memorized graphs. To reiterate, since GIST needs to find an eigenvalue decomposition to compute the Laplacian, for dense graphs we have a complexity of $\mathcal{O}(n^3)$ where $n$ is the number of nodes in a graph, and for sparse graphs it amounts to $\mathcal{O}(k(n+m))$ where $m$ is the number of edges and $k$ is the number of eigenvalues to find.
Since $k$ is a constant, this amounts to $\mathcal{O}(n+m)$. Since our implementation uses sparsely-memorized graphs, and from Table 3 in Sec. D you can see that most graphs are sparse anyway ($m \ll n^2$ on average), the execution time is linear in the number of nodes $n$ and edges $m$. Thus, scaling GIST to huge graphs is not expensive, even in real-world graphs, which are generally sparse. **Q2: GIST vs. causal-based explainers:** To the best of our knowledge, only CLEAR [4] is a causal-based explainer. We included it in our experiments, and GIST consistently performs better across the board in validity (Tab. 1), fidelity (Tab. 2), and counterfactual similarity with the input in terms of Graph Edit Distance (see Fig. 8). This is also because, when producing a counterfactual candidate, CLEAR generates a fully-connected stochastic graph which it then needs to match to the input. Note that graph matching is an NP-hard problem, and the approximations used undermine CLEAR's performance. GIST, on the other hand, is very simple and understandable: *overshoot the oracle $\Phi$'s decision boundary and backtrack via graph transformers*. **Q3: Real-world applications.** One compelling real-world application of GIST is drug repurposing. Suppose we have a known drug A that effectively treats disease $d_1$. Our goal is to discover a new drug B that treats a different disease $d_2$, potentially by modifying A. However, directly identifying B is often difficult and financially expensive. GIST could help by first exploring compounds that are far from A - those that may not initially preserve A's chemical properties but show efficacy against $d_2$ (albeit with more side effects). From there, GIST can iteratively "walk back" toward A, gradually optimizing for lower side effects while preserving therapeutic relevance to $d_2$. This path - from distant molecular candidates back toward A - could lead to a novel compound B that is both effective for $d_2$ and safer.
We find this direction highly promising and plan to investigate it further. However, applying GIST to drug discovery requires close collaboration with chemists and pharmaceutical experts, as well as human-in-the-loop evaluation to ensure the physical plausibility of generated compounds. Such interdisciplinary work would also naturally address the need for domain-specific user studies, which is a point you raised in your original review. [1] Prado-Romero et al. Robust stochastic graph generator for counterfactual explanations. AAAI'24 (page 15-16, Figs. 10-11 of the suppl. material on arXiv) [2] Prenkaj et al. Unifying Evolution, Explanation, and Discernment: A Generative Approach for Dynamic Graph Counterfactuals. KDD'24 (Figs. 7-8) [3] Prado-Romero et al. A survey on graph counterfactual explanations: definitions, methods, evaluation, and research challenges. ACM CSUR'24. [4] Ma, et al. CLEAR: Generative counterfactual explanations on graphs. NeurIPS'22.
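The sparse-complexity argument in **Q1** — roughly $O(k(n+m))$ for $k$ extreme eigenvalues via iterative methods whose per-iteration cost is one sparse matrix-vector product — can be sketched as follows (our illustration; the paper's implementation details may differ):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def k_smallest_laplacian_eigs(A, k):
    """k smallest Laplacian eigenvalues of a sparsely stored graph.
    Each Lanczos iteration of eigsh costs one sparse mat-vec, i.e.,
    O(n + m), versus O(n^3) for a dense eigendecomposition."""
    A = sp.csr_matrix(np.asarray(A, dtype=float))
    deg = np.asarray(A.sum(axis=1)).ravel()
    L = sp.diags(deg) - A                       # combinatorial Laplacian
    vals = eigsh(L, k=k, which='SA', return_eigenvectors=False)
    return np.sort(vals)

# path graph on 5 nodes: Laplacian eigenvalues are 4*sin^2(j*pi/10), j=0..4
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
low = k_smallest_laplacian_eigs(A, 2)
```

With $k$ fixed, the cost scales linearly with $n + m$, matching the rebuttal's point that sparse real-world graphs keep the spectral step cheap.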
Scalable Approximation Algorithms for $p$-Wasserstein Distance and Its Variants
Accept (poster)
Summary: This work introduces a method to compute a $O(\log n)$-approximation of the p-Wasserstein distance in $O(n^2 \log n)$ time for $p\ge 2$. The method is based on the construction of Hierarchically well-Separated Trees (HSTs), and the Hungarian algorithm with Bichromatic Closest Pairs. Furthermore, the authors provide a method to approximate a variant of the p-Wasserstein distance which is robust to noise, named $(p,k)$-RPW, which can be computed in $O(n^2/\delta^3)$ for an additive error of $\delta$. This is an improvement compared to other methods such as entropic regularized OT computed with Sinkhorn, which is computed in $O(n^2/\delta^p)$. ## Update after rebuttal I maintain my score as I believe that additional experiments (especially for RPW) and comparisons would strengthen this work. Claims And Evidence: The authors claim that they provide a method to compute a $O(\log n)$-approximation of the p-Wasserstein distance in $O(n^2 \log n)$ time for $p\ge 2$ (Theorem 1.1), and that they can approximate the $(p,k)$-RPW distance in $O(n^2/\delta^3)$ time for an additive error of $\delta$ (Theorem 1.2). Both claims are supported by proofs. More precisely, in Section 2, a distance is derived and shown to be a $O(\log n)$ approximation of the Wasserstein distance; in Section 3, an algorithm with a complexity of $O(n^2 \log n)$ to compute the proposed distance is described. Finally, the algorithm to approximate the RPW distance is derived in Section 4. Methods And Evaluation Criteria: The methods proposed make sense. The evaluation criteria allow the claims to be verified in practice. Nonetheless, I believe it would have been interesting to provide comparisons with known baselines, e.g. comparing the runtime of the method with the implementation of the OT problem in the library Python Optimal Transport. And likewise, it would have been nice to compare the RPW algorithm with e.g. Sinkhorn. Also, I am not sure whether the results for RPW are presented in Figure 1.
Theoretical Claims: The theoretical claims seem good. The claims (see Claims and Evidence section) are supported through proofs in Sections 2, 3, and 4 and in the Appendix. However, I did not check all the details. Experimental Designs Or Analyses: The experimental design seems good. They demonstrate the complexity of the first algorithm in Figure 1.b and 1.e, and the approximation in Figure 1.f. They also verify the impact of several parameters on the results. However, there are no comparisons with baselines which are usually used to compute the $p$-Wasserstein distance. Moreover, a practical study of the algorithm to approximate RPW seems to be lacking. Supplementary Material: I did not review the supplementary materials. Relation To Broader Scientific Literature: One of the key contributions is to use a set of $p$ Hierarchically well-Separated Tree metrics instead of one, which was done for $p=1$ in [1, 2]. Another key contribution is to show a method to provide an approximation of the $p$-RPW distance, introduced in [3]. [1] Moses S Charikar. Similarity estimation techniques from rounding algorithms. In Proceedings of the thirty-fourth annual ACM symposium on Theory of computing, pages 380-388, 2002. [2] Jon Kleinberg and Eva Tardos. Approximation algorithms for classification problems with pairwise relationships: Metric labeling and markov random fields. Journal of the ACM (JACM), 49(5):616-639, 2002. [3] Sharath Raghvendra, Pouyan Shirzadian, and Kaiyi Zhang. A new robust partial p-Wasserstein-based metric for comparing distributions. In 41st International Conference on Machine Learning, 2024. Essential References Not Discussed: All the essential references seem to be discussed. Other Strengths And Weaknesses: **Strengths** - Interesting method to compute an approximation of the $p$-Wasserstein distance in $O(n^2 \log n)$ time. - Also provides an interesting method to compute an approximation of the $p$-RPW distance. - Verifies the theoretical results in practice.
**Weaknesses** - The procedures proposed can be a bit hard to understand, as they are described in the text. Algorithms/Figures could maybe help the readers to better understand the methods. - The experimental verifications are done only with the procedure to approximate the $p$-Wasserstein distance (if I understand well). Moreover, there are no comparisons with baselines which are usually used to compute the $p$-Wasserstein distance. Other Comments Or Suggestions: I suggest improving readability by adding algorithms or figures to help the reader understand how to implement the algorithm. It would also be nice to have experimental results for the computation of RPW, and comparisons with benchmark methods. For instance, the algorithm should be faster than the Hungarian algorithm, but it does not appear in the experimental section. Maybe I missed it, but I am not sure what "$an$" and "$an^{(3/2)}$" refer to in Figure 1.b and 1.d. Typos: - Line 189, 1st column: "distace" Questions For Authors: 1. Is your approximation algorithm faster than ot.emd in the library POT? 2. Did you implement the algorithm to compute RPW? 3. How does the algorithm you introduce to compute RPW differ from the ones in [1]? [1] Sharath Raghvendra, Pouyan Shirzadian, and Kaiyi Zhang. A new robust partial p-Wasserstein-based metric for comparing distributions. In 41st International Conference on Machine Learning, 2024. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review and constructive feedback. The two main contributions of our paper are as follows: * An $O(\log n)$ relative approximation algorithm for $p=1$ has been known for almost three decades and has had a significant impact. Despite significant effort, no such algorithm was known for $p=2$. We present the first $O(\log n)$ relative approximation algorithm for the $p$-Wasserstein distance, for any fixed value $p > 1$. * Under reasonable assumptions, we show that any algorithm that additively approximates the $p$-Wasserstein distance requires $\Omega(n^2/\delta^p)$ time. This hardness also extends to other robust variants including $\lambda$-ROBOT and the partial $p$-Wasserstein distance. In contrast, the $p$-RPW distance, due to its robustness, admits an $O(n^2/\delta^3)$ time algorithm for any value $p \ge 1$. We note that our contributions are significant from a theoretical standpoint. The primary reason to implement and test our relative approximation algorithm is to understand the gap between its worst-case theoretical guarantees and its practical performance. Having said that, we can include a comparison of our algorithm with the standard Hungarian algorithm, as well as a comparison of an implementation of our approximation algorithm for $p$-RPW with that of [1]. **Comparison with OT.emd:** There are two reasons for not making a direct comparison between our algorithm and OT.emd. * OT.emd requires $O(n^2)$ space, which means it can only be executed on small instances. This restricts our ability to compare our algorithm with OT.emd and Sinkhorn on larger inputs and understand how their performance scales with input size. * Our prototype implementation is written in Python, whereas OT.emd is based on a highly optimized C++ implementation. Comparing the performance of code written in different languages, especially on small instances only, can be misleading, which is why we have avoided a direct comparison. 
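For context on the exact baseline mentioned here, the Hungarian algorithm applied to the $p$th-power cost matrix recovers the exact $p$-Wasserstein distance between uniform measures on equal-size point sets. The following is an illustrative sketch using SciPy's assignment solver, not the authors' implementation; the point sets and parameters are made up:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def exact_wasserstein_p(X, Y, p=2):
    """Exact p-Wasserstein distance between uniform measures on two
    equal-size point sets, via an optimal assignment on the p-th power costs."""
    n = len(X)
    # Pairwise Euclidean costs, raised to the p-th power.
    cost = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) ** p
    row, col = linear_sum_assignment(cost)  # Hungarian-style exact solver
    return (cost[row, col].sum() / n) ** (1.0 / p)

rng = np.random.default_rng(0)
X = rng.random((200, 2))
Y = rng.random((200, 2))
print(exact_wasserstein_p(X, Y, p=2))
```

Note that building the full cost matrix is exactly the $O(n^2)$ space cost discussed above, which is why such exact solvers only run on small instances.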
---- ---- >Maybe I missed it, but I am not sure what "$an$" and "$an^{(3/2)}$" refer to in Figure 1.b and 1.d. **Response:** The plot uses a $\log$–$\log$ scale, where any polynomial function appears as a straight line, with the slope indicating the polynomial’s exponent. In Plot 1.b, we included the functions $f(n) = an$ and $f(n) = an^{3/2}$ (for some constant $a$) to illustrate that, in our experiments, the running time of the Hungarian algorithm with our data structure is bounded by $O(n^{3/2})$. ---- ---- >Did you implement the algorithm to compute RPW? **Response:** We show that additive approximations require $\Omega(n^2/\delta^p)$ time due to the $p$-Wasserstein distance's sensitivity to noise, and that the robustness of $p$-RPW allows for significantly faster additive approximation, with an execution time of $O(n^2/\delta^3)$ (for all values of $p$). Our intention was to highlight this gap, which can be accomplished without the need for an implementation. Nonetheless, we are in the process of implementing our algorithm and commit to comparing it with the algorithm of [1] in the next version of our paper. ---- ---- >How does your algorithm introduced to compute RPW differ from the ones in [1]? **Response:** The algorithm introduced in [1] computes the OT profile and uses it to approximate the $p$-RPW, which takes $O(n^2/\delta^p + n/\delta^{2p})$ time. We use a guessing procedure along with an early stopping criterion to reduce the execution time to $O(n^2/\delta^3)$. More precisely, our algorithm picks a guess $g$ from one of the $O(1/\delta)$ values as an approximation of the true $p$-RPW distance. For this value, it executes only $O(p/\delta^2)$ iterations of the LMR algorithm [2]. It then returns the minimum estimate achieved across all the $O(1/\delta)$ guesses. In contrast, the algorithm in [1] runs the LMR algorithm [2] for $O(1/\delta^p)$ iterations. [1] S. Raghvendra, P. Shirzadian, and K. Zhang. "A New Robust Partial $p$-Wasserstein-Based Metric for Comparing Distributions." 
ICML 2024. [2] N. Lahn, D. Mulchandani, and S. Raghvendra. "A graph theoretic additive approximation of optimal transport." NeurIPS 2019. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal and for clarifying the main contributions. I understand that it is mostly a theoretical work. The comparison with ot.emd might indeed be hard to compete with. A solution might be to reimplement it in python? For RPW, I still believe that a comparison of the implementations would really strengthen this work, even though the theoretical result is valuable on its own as underlined by the authors. --- Reply to Comment 1.1.1: Comment: Our work is primarily theoretical, and we have rigorously demonstrated its efficiency and correctness through detailed proofs. To ensure completeness, we will include additional comparisons that you have suggested. Specifically, we will compare the efficiency of our relative approximation algorithm against an exact solver, i.e., the Hungarian algorithm, and compare our RPW approximation with the algorithm proposed in [1]. We sincerely thank the reviewer for highlighting the exact solver OT.emd and its efficiency. OT.emd is a C++ implementation of the network simplex algorithm. This implementation is detailed in [2]. A valuable direction for future research would involve comparing the trade-off between accuracy and efficiency across various C++ implementations of OT approximations. However, this analysis is beyond the scope of our current paper. [1] S. Raghvendra, P. Shirzadian, and K. Zhang. "A New Robust Partial $p$-Wasserstein-Based Metric for Comparing Distributions." ICML 2024. [2] N. Bonneel, M. Van De Panne, S. Paris, and W. Heidrich. "Displacement interpolation using Lagrangian mass transport". In ACM Transactions on Graphics (TOG), 2011.
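The log–log reading of Figure 1.b described in the rebuttal above (a polynomial's exponent appearing as the slope of a straight line) can be checked numerically. This hypothetical snippet fits a line to log-transformed running times to recover the exponent; the timing values are synthetic stand-ins, not the paper's measurements:

```python
import numpy as np

# Hypothetical running times following t(n) = a * n^{3/2} (a = 2e-6),
# standing in for the measured Hungarian-with-BCP timings in Figure 1.b.
n = np.array([1_000, 2_000, 4_000, 8_000, 16_000], dtype=float)
t = 2e-6 * n ** 1.5

# On a log-log scale, t = a * n^k becomes log t = log a + k * log n,
# so the slope of a linear fit recovers the exponent k.
slope, intercept = np.polyfit(np.log(n), np.log(t), deg=1)
print(f"estimated exponent: {slope:.3f}")  # close to 1.5
```

On real, noisy timing data the fitted slope only upper- or lower-bounds the exponent approximately, which is why the rebuttal overlays the reference curves $an$ and $an^{3/2}$ rather than reporting a single fitted value.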
Summary: This paper aims to provide an $O(\log n)$ approximation algorithm for the $p$-Wasserstein distance that runs in $O(n^2 \log n \log U \log \Delta)$ time. This is done with a collection of $p$ HST trees and with a dynamic BCP data structure to efficiently find augmenting paths in the dual framework for the OT problem. The authors also present an algorithm for an additive approximation of a noise-resistant variant of $p$-Wasserstein called $(p,k)$-RPW. Their approach to this borrows ideas from the work of [Lahn et al. 19], scaling the problem to an integer one and using the [Gabow and Tarjan 89] algorithm. Some simple experiments demonstrate the scaling behavior that is expected, along with evidence that the average-case performance for the first method is between $n$ and $n^{3/2}$. ## Update after rebuttal: I felt sufficiently satisfied with the explanations for my concerns, so I've updated my score to accept. Claims And Evidence: This is mostly a theoretical paper, so its evidence is mostly in the proofs. As noted in the section below, I was able to check Section 2 and Appendix A carefully, which included: * an $O(\log n)$ (in expectation) approximation of the true $p$-Wasserstein distance with $p$ HST trees * a description of a data structure that allows for efficient construction, retrieval, and insertion/deletion The proofs were correct. I was not able to check Section 3 (which used these tools to calculate their tree approximation distance in near-quadratic time) and Section 4/Appendix B (which proposed their method for additive approximation of $(p,k)$-RPW) in as much detail, but the techniques used were sensible. There was also some empirical support for the theoretically established complexities via some simple experiments. Methods And Evaluation Criteria: Yes, the experiments were reasonably done. Theoretical Claims: I checked the arguments of Section 2 (including appendix A) in detail and am convinced of their correctness. 
Some questions on minor details in latter sections. For sections 3 and 4, I did not have time to check them carefully, but the parts in the main text seem fine upon a more cursory inspection. Experimental Designs Or Analyses: The experiments were relatively simple, but reasonable, in my opinion. Supplementary Material: I read appendix section A, but not appendix section B. Relation To Broader Scientific Literature: The work does a good job of referencing and discussing prior work. I did not notice any glaring omissions. One note though, is that I would have preferred that the authors mention the poor rate of convergence for empirical measures to underlying continuous measures in $p$-Wasserstein distance (a reference is [Weed & Bach 19]: *Sharp asymptotic ... empirical measures in Wasserstein distance*). This is a major practical limitation of the Wasserstein distance that felt conspicuously missing from the sentence in lines 054-055. I understand that it doesn't fit the story well, as I don't think this approach addresses this problem at all, but it felt wrong to omit it entirely. Essential References Not Discussed: See above. Other Strengths And Weaknesses: See below. Other Comments Or Suggestions: Rating Explanation: I appreciated the work and the technical depth of the approach to an important problem. My main reason for not providing a higher score is that the presentation seems a bit disjointed, with two related approximation problems solved with completely different techniques. Much of the intro describes noise sensitivity as an issue, but then the majority of text is spent on the first problem which is not looking at the $p-RPW$ distance. One clarity nit: At the start of 2.1, I would remind people that one is considering a discrete, finite metric space. I found myself confused at $k^u_j$ being an integer for a moment. Questions For Authors: 1. Is the *spread* equal to the ratio of largest to smallest edge costs? 
These are both denoted with the same $\Delta$ notation, but are never explicitly connected. 2. In line 207, lev(a,b) is never explicitly defined, I don't think. It is easy enough to infer, but please clarify. 3. I don't think I understood the second inequality in the derivation 181-189 (second column, for inequality 1). It was my impression that the telescoping sum would eliminate $H_{k^a_0}$ and $H_{k^b_0}$ if $h \geq 2$. I believe the result still holds, but would have expected different terms. Please clarify. 4. In appendix A, for the proof of lemma 2.2, I expected the second equality to be a $\leq$, as the optimal transport plan for the base metric will be suboptimal for the tree metric, no? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review and constructive feedback. We will add a short discussion and a reference about the convergence rate of the Wasserstein distance, as well as the formal definition of spread and the corrected inequality in Appendix A in our next version. We answer the main concern raised by the reviewer below. >My main reason for not providing a higher score is that the presentation seems a bit disjointed, with two related approximation problems solved with completely different techniques. Much of the intro describes noise sensitivity as an issue, but then the majority of text is spent on the first problem. **Response:** Noise sensitivity of $p$-Wasserstein distance is the primary reason why both relative and additive approximations are challenging to design. We elaborate on this below and edit the paper to include this discussion. *Relative Approximations:* Embedding a metric into a tree metric introduces noise, or distortion, to the edge costs. Due to the high sensitivity, the optimal $p$-Wasserstein distance with respect to these noisy edge costs fails to serve as a relative approximation to the $p$-Wasserstein distance with respect to the original costs, i.e., the approximation quality is unbounded. To address this, we reduce the distortion in edge costs by selecting the smallest distortion across $p$ different tree embeddings. We demonstrate that the optimal $p$-Wasserstein distance computed with respect to these edge costs is in fact a $O(\log n)$-approximation of the true $p$-Wasserstein distance. *Additive Approximations:* A small noisy mass of $\delta^p$ can affect the $p$-Wasserstein distance by $\delta$. Consequently, any $\delta$-additive approximation algorithm must transport all but $\delta^p$ mass in an approximately minimum-cost way to ensure a bound of $\delta$ on the additive approximation, which, under reasonable assumptions takes $\Omega(n^2/\delta^p)$ time. 
In the extreme case, when $p =\infty$, any approximate solver must transport all the mass in an approximately minimum-cost way which requires $\Omega(n^{2.5})$ time. In essence, the sensitivity to noise is precisely why additive approximations for the $p$-Wasserstein distance require $\Omega(n^2/\delta^p)$ time. In contrast, $p$-RPW is more robust, as a noise of $\delta$ impacts it by at most $\delta$. Our algorithm essentially transports all but $\delta$ mass in an approximate minimum-cost way and has an execution time of $O(n^2/\delta^3)$. Thus, it admits faster algorithms precisely because of its robustness to noise. ---- ---- >In line 207, lev(a,b) is never explicitly defined, I don't think. It is easy enough to infer, but please clarify. **Response:** The notation is defined in line 186. ---- ---- >I don't think I understood the second inequality in the derivation 181-189 (second column, for inequality 1). It was my impression that the telescoping sum would eliminate $H_{k^a_0}$ and $H_{k^b_0}$ if $h \geq 2$. I believe the result still holds, but would have expected different terms. Please clarify. **Response:** Thanks for raising this point. While the result of the equation in lines 181-189 is correct, the subscript of H in the first line of the equation has to be changed from $j-2$ to $j+2$. Note that in our notation, the root node (the largest cell) is at level 0. The slight typo resulted since the original paper presenting the HST construction [1] had the root at level $h$. We will change $j-2$ to $j+2$ in our next version. [1] J. Fakcharoenphol, S. Rao, and K. Talwar. "A tight bound on approximating arbitrary metrics by tree metrics." STOC, 2003.
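The noise-sensitivity claim in the rebuttal above (a noisy mass of $\delta^p$ shifting the $p$-Wasserstein distance by $\delta$) is easy to verify in one dimension, where $W_p$ between equal-size uniform empirical measures reduces to matching sorted samples. A small illustrative check, with made-up point sets and $\delta$:

```python
import numpy as np

def wasserstein_p_1d(x, y, p):
    """p-Wasserstein distance between uniform empirical measures on the
    real line: sort both samples and match them in order."""
    x, y = np.sort(x), np.sort(y)
    return (np.mean(np.abs(x - y) ** p)) ** (1.0 / p)

n, p, delta = 10_000, 2.0, 0.1
x = np.zeros(n)
y = np.zeros(n)
# Corrupt a delta^p fraction of the mass by moving it a unit distance.
k = int(delta ** p * n)  # 100 points out of 10,000
y[-k:] = 1.0

# The p-th power cost is k/n = delta^p, so W_p shifts by exactly delta.
print(wasserstein_p_1d(x, y, p))  # ≈ 0.1 (= delta)
```

This is precisely why, as the rebuttal argues, a $\delta$-additive approximation cannot afford to misroute even a $\delta^p$ fraction of the mass.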
Summary: This paper develops efficient algorithms for approximating the $p$-Wasserstein distance $W_p$ and a robust variant of $W_p$. In particular, when the input measures are uniform over discrete point sets of size $n$, they provide an approximation algorithm for $W_p$ with relative error $O(\log n)$ that runs in time $O(n^2 \log \Delta)$, where $\Delta$ is the ratio between the largest and smallest edge costs. Previous results of this form only worked with $p=1$. For additive error, they provide a reduction suggesting that getting error $\delta$ requires time $\Omega(n^2/\delta^p)$ unless one resorts to a class of currently impractical algorithms. Then, they show that a robust variant of $W_p$ can be estimated in time $O(n^2/\delta^3)$ for all $p$. ## Update after Rebuttal I maintain my positive evaluation. Claims And Evidence: Yes. This is primarily a theoretical paper and all theorems are accompanied by proofs which appear sound. They supported their guarantees with some basic experiments, with results that match or beat those predicted by the theory (which I think are sufficient given the theoretical nature of the paper). Methods And Evaluation Criteria: Yes, I think the evaluation settings are fine. I assume they are from the setting where both sample sets are drawn from the same distribution (could be clarified). This makes sense because when the true distance is small the relative error is more tolerable. Theoretical Claims: Yes, I read through all of the proofs for the relative approximation result and the reduction, and I read through the main parts of the additive approximation result (though I just skimmed over some auxiliary lemmas in the appendix). Everything looks good to me, assuming that they accurately describe the data structure results from previous work, which I am not very familiar with and did not verify. There is a bit of ambiguity on the computational model. 
For example, reading the input points / computing the cost matrix takes $n^2 d$ time in $d$ dimensional Euclidean space. I think it's fine to primarily assume the distance matrix can be queried in constant time, but there should probably be a remark on this. Also, I am used to seeing additive approximation results which scale with the largest value of the cost matrix. I'm just confirming that there is not such a dependence here (assuming that $\Delta$ is small and that we can do arithmetic with entries of the distance matrix in constant time). Experimental Designs Or Analyses: The experiment setup seem fine as above. The results are in line (or even a bit better) than their theory predicts. By the way, can the authors elaborate on the $n^{3/2}$ scaling they observe - in particular, do they think there is room for a tighter computational complexity bound? Can the authors please confirm that they will publish their code if the paper is accepted? Supplementary Material: I did not review the code, but did read through the appendices (though I just skimmed the auxiliary lemmas of the last result). Relation To Broader Scientific Literature: There is lots of work on statistics and computation of OT and its robust variants, which they discuss. I think they appropriately cite the relevant work on OT computation, though I work more on the statistical side so I cannot be certain. I am only loosely familiar with the data structure results that they use. Essential References Not Discussed: Not that I know of. Other Strengths And Weaknesses: This paper is well-written, and I think that the first result and its proof ideas are quite nice - honestly enough to justify acceptance on their own. I could imagine that their tree metric might have other applications in computational geometry. Since I am not as familiar with their robust variant, I am less sure of the significance of that result. E.g. 
it could be the case that it is easier to calculate for reasons that make it less useful in practice, though I am not claiming that to be the case. Other Comments Or Suggestions: Could you include the dependence of your complexities on p in the theorem statements? It seems to pretty well accounted for in the proofs, although I suppose there should be another multiplicative overhead of p when you perform arithmetic on entries of the exponentiated cost matrix (depending on the computational model). Define LMR initialism It seems that each cluster C is defined as a subset of points, so I am not sure what the notation X_C is needed for. There seems to be a typo in the definition of $k_j^x$ I think the quantity $d(C)$ used in the statement of Lemma 2.3 is only defined in the proof. Questions For Authors: When p=1, how close is the robust W1 variant you consider to Dudley's bounded Lipschitz distance (IPM wrt Lipschitz and bounded functions)? That distance is also upper bounded by W1 and TV. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review and constructive feedback. We will update the paper to include your suggestions. We assume that distances can be queried in $O(1)$ time and will state it explicitly. We will also include additional dependencies of the execution time on $p$ that arise due to taking the $p$th power of the costs. >Can the authors elaborate on the $n^{3/2}$ scaling they observe - in particular, do they think there is room for a tighter computational complexity bound? **Response:** Yes, it is possible to achieve a bound of $\tilde{O}(n^{3/2})$ for the special case of the minimum-cost bipartite matching problem, provided the HSTs are precomputed. Sharathkumar and Agarwal [1] showed that Gabow and Tarjan's algorithm [2] for min-cost bipartite matching can be implemented using $\tilde{O}(n^{3/2})$ queries to the bichromatic closest pair data structure. Combining this with the data structure of Section 2, we can bound the execution time by $\tilde{O}(n^{3/2})$. We will include this discussion in the next version of our paper. [1] R. Sharathkumar and P. K. Agarwal. "Algorithms for the transportation problem in geometric settings." SODA, 2012. [2] H. Gabow and R. E. Tarjan. "Faster scaling algorithms for network problems." SIAM Journal on Computing, 1989. ---- ---- >Since I am not as familiar with their robust variant, I am less sure of the significance of that result. E.g. it could be the case that it is easier to calculate for reasons that make it less useful in practice, though I am not claiming that to be the case. **Response:** The $p$-Wasserstein distance's sensitivity to noise is a key factor limiting its practical utility and making the design of an additive approximation particularly challenging. A small noisy mass of $\delta^p$ can affect the $p$-Wasserstein distance by $\delta$. 
Consequently, any $\delta$-additive approximation algorithm must transport all but $\delta^p$ mass in an approximately minimum-cost way to ensure a bound of $\delta$ on the additive approximation, which, under reasonable assumptions takes $\Omega(n^2/\delta^p)$ time. In the extreme case, when $p =\infty$, any approximate solver must transport all the mass in an approximately minimum-cost way which requires $\Omega(n^{2.5})$ time. In essence, the sensitivity to noise is precisely why additive approximations for the $p$-Wasserstein distance require $\Omega(n^2/\delta^p)$ time. In contrast, $p$-RPW is more robust, as a noise of $\delta$ impacts it by at most $\delta$. Our algorithm essentially transports all but $\delta$ mass in an approximate minimum-cost way and has an execution time of $O(n^2/\delta^3)$. Therefore, the $p$-RPW distance is not only robust to noise but also allows for faster algorithms, a direct consequence of this robustness. ---- ---- >Can the authors please confirm that they will publish their code if the paper is accepted? **Response:** Yes, we will make our GitHub repository public. ---- ---- >When $p=1$, how close is the robust W1 variant you consider to Dudley's bounded Lipschitz distance (IPM w.r.t Lipschitz and bounded functions)? **Response:** For $p = 1$ and in a metric space with a unit diameter, the $1$-RPW can be written as $\sup_{Lip(f) \le 1-||f||_\infty} |\int f d\mu - \int f d\nu|$, which is upper-bounded by Dudley's bounded Lipschitz distance. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed response. I mostly buy the second response, although I imagine there are settings where the sensitivity is a feature. In any case, I do not object to RPW being useful and worth studying. The final connection to Dudley's metric is nice. I maintain my positive score.
Summary: This paper is concerned with developing a new algorithm for estimating $p$-Wasserstein distances. Notably, this algorithm enables approximating the $p$-Wasserstein distance between distributions supported on $n$ atoms up to a multiplicative factor which scales as $O(\log n)$ (in expectation), i.e. the approximation satisfies $0\leq \mathbb E[A(\mu,\nu)]\leq C\log(n) W_p(\mu,\nu)$. The approximation can be computed in time scaling as $O(n^2\log n \log U\log \Delta)$, where $U,\Delta$ are problem dependent parameters. The approach is based on using a number of hierarchically well-separated trees which are constructed independently and computing a "tree-based distance" for each instance. This distance can be computed efficiently using a dynamic bichromatic closest pair (BCP) data structure which can be used to compute Wasserstein distances exactly in $O(n^2\Phi(n)\log(U))$ time, where $\Phi(n)$ is the query/update time for the underlying data structure (in this implementation, $O(\log(\Delta)\log(n))$). It is then argued that a $\delta$-additive approximation of the $p$-Wasserstein distance with an execution time of $O(n^2/\delta^{p(1-\epsilon)})$ is unlikely to exist. On the other hand, by adapting a version of the LMR algorithm, it is shown that a $\delta$-additive approximation of the so-called RPW problem can be obtained in $O(n^2/\delta^3)$ time. The paper concludes with some experimental validations of these findings. ## update after rebuttal The authors' rebuttal addressed my primary question about this work; I have updated my score in consequence. Claims And Evidence: The paper claims to provide an improved algorithm for approximating the $p$-Wasserstein distance between distributions supported on $n$ atoms up to a multiplicative factor which scales as $O(\log n)$ (in expectation), i.e. 
the approximation satisfies $0\leq \mathbb E[A(\mu,\nu)]\leq C\log(n) W_p(\mu,\nu)$ and that this approximation can be computed in time scaling as $O(n^2\log n \log U\log \Delta)$. The article carefully explains the ideas underlying this approximation and the mathematical results are supported by proofs provided in the supplement. As such, I have no concerns regarding the accuracy of the claims made in the paper. The claims regarding the RPW problem and $\delta$-additive approximations are also well supported and appear reasonable to me. These claims are also empirically validated in numerical experiments which I find to be sufficient. Notably, the multiplicative scaling factor in the approximation is seen to scale sublinearly. Methods And Evaluation Criteria: While the experiments are not extensive, they adequately demonstrate the claimed results. Theoretical Claims: I did not verify the proofs in the supplement. Experimental Designs Or Analyses: I believe the experiments are sound. Supplementary Material: No Relation To Broader Scientific Literature: The paper advances our understanding of algorithms for the numerical resolution of optimal transport and furnishes an efficient algorithm for estimating the RPW distance (based on a modification of a known algorithm). Essential References Not Discussed: N/A Other Strengths And Weaknesses: I believe the paper is quite well-written and provides clear explanations. The algorithmic aspects of the work are particularly well explained. The algorithm for the RPW distance is also a nice contribution and its scaling appears favorable. My concern lies with the quality of the approximation provided for the Wasserstein distance. In effect, an $O(\log n)$ multiplicative factor in the quality of the approximation appears very undesirable even if it is more efficient to compute. 
Although the experiments demonstrate that in two simple examples the dependence on $n$ is not bad, the approximation error is on the order of 1.5, which implies that this value is not a very good estimate for the actual Wasserstein distance. Other Comments Or Suggestions: Line 188 left column: distace -> distance Questions For Authors: My only question pertains to the $O(\log n)$ multiplicative factor in the approximation of the Wasserstein distance. Perhaps I have misunderstood something, but this appears to be a very serious limitation of the approach and implies that the quality of the approximation is effectively unknown. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review and constructive feedback. We answer the main concern raised by the reviewer below. >The importance of $O(\log n)$ approximation factor. **Response:** A tree-embedding-based $O(\log n)$-approximation for the $1$-Wasserstein distance originally developed by [1, 2] has been effectively utilized in various applications. These applications include the design of the FlowTree algorithm [3], $k$-NN and LSH data structures [4, 5], $1$-Wasserstein barycenter computation [6, 7], streaming algorithms for $1$-Wasserstein [8], design of $(1+\varepsilon)$-relative approximation algorithms [9, 10] for the $1$-Wasserstein distance, improving the accuracy of additive approximations such as Sinkhorn for the $1$-Wasserstein distance [11], and defining related metrics such as the tree-sliced Wasserstein distance as a proxy for the $1$-Wasserstein distance [12]. For many of these applications, a bound of $O(\log n)$ on the approximation factor is critical. Despite considerable efforts including some lower bounds [13], there are very few known approximation algorithms for the $p$-Wasserstein distance for $p > 1$ [14, 15]. In this paper, we introduce the **first $O(\log n)$-approximation algorithm** for computing the $p$-Wasserstein distance for any fixed $p \ge 1$. Extending our algorithm to applications such as $k$-NN under the $2$-Wasserstein distance or boosting the accuracy to $(1+\varepsilon)$-approximation remains an important future direction of research. ---- [1] M. S. Charikar. "Similarity estimation techniques from rounding algorithms". STOC, 2002. [2] J. Kleinberg and E. Tardos. "Approximation algorithms for classification problems with pairwise relationships: Metric labeling and Markov random fields." JACM, 2002. [3] A. Backurs, Y. Dong, P. Indyk, I. Razenshteyn, and T. Wagner. "Scalable nearest neighbor search for optimal transport." ICML, 2020. [4] P. Indyk and N. Thaper. 
"Fast image retrieval via embeddings". International Workshop on Statistical and Computational Theories of Vision, 2003. [5] T. Liu, A. Moore, K. Yang, and A. Gray. "An investigation of practical approximate nearest neighbor algorithms." NeurIPS 2004. [6] T. Le, V. Huynh, N. Ho, D. Phung, and M. Yamada. "Tree-Wasserstein barycenter for large-scale multilevel clustering and scalable Bayes." ArXiv:1910.04483, 2019. [7] P. K. Agarwal, S. Raghvendra, P. Shirzadian, and K. Yao. "Efficient Approximation Algorithm for Computing Wasserstein Barycenter under Euclidean Metric." SODA, 2025. [8] X. Chen, R. Jayaram, A. Levi, and E. Waingarten. "New streaming algorithms for high dimensional EMD and MST." STOC, 2022. [9] J. Sherman. "Generalized preconditioning and undirected minimum-cost flow." SODA, 2017. [10] P. K. Agarwal, S. Raghvendra, P. Shirzadian, and K. Yao. "Fast and accurate approximations of the optimal transport in semi-discrete and discrete settings." SODA, 2024. [11] P. K. Agarwal, S. Raghvendra, P. Shirzadian, and R. Sowle. "A higher precision algorithm for computing the 1-Wasserstein distance." ICLR, 2023. [12] T. Le, M. Yamada, K. Fukumizu, and M. Cuturi. "Tree-sliced variants of Wasserstein distances." NeurIPS 2019. [13] A. Andoni, A. Naor, and O. Neiman. "Impossibility of sketching of the 3d transportation metric with quadratic cost." ICALP, 2016. [14] N. Lahn and S. Raghvendra. "An $O(n^{5/4})$ Time $\varepsilon$-Approximation Algorithm for RMS Matching in a Plane." SODA, 2021. [15] P. K. Agarwal and J. M. Phillips. "On bipartite matching under the RMS distance." CCCG 2006.
Graph Attention is Not Always Beneficial: A Theoretical Analysis of Graph Attention Mechanisms via Contextual Stochastic Block Models
Accept (poster)
Summary: The paper rigorously investigates when graph attention mechanisms help—and when they do not—in the context of node classification for graphs generated by Contextual Stochastic Block Models (CSBM). It introduces a simplified non-linear attention mechanism and demonstrates theoretically that attention improves classification when structure noise outweighs feature noise, but may degrade performance when the reverse is true. The analysis further claims that, in high signal-to-noise regimes, graph attention can effectively counteract the over-smoothing problem that plagues traditional graph convolutional networks. Building on these insights, the authors propose a novel multi-layer Graph Attention Network architecture that substantially relaxes the conditions for perfect node classification compared to single-layer variants. Experiments on synthetic and real-world datasets seem to corroborate these theoretical findings. Claims And Evidence: While the paper provides rigorous theoretical analyses and supportive experiments within the CSBM framework, some claims may be viewed as less convincingly supported in broader contexts. In particular, the assertion that multi-layer GATs can achieve perfect node classification under significantly relaxed signal-to-noise conditions is heavily dependent on idealized assumptions inherent in the CSBM, which might not extend to more heterogeneous real-world graphs. Methods And Evaluation Criteria: One notable limitation is the reliance on only a few standard datasets—Citeseer, Cora, and Pubmed—which, while widely recognized, are relatively small and may not represent the complexity or scale of contemporary real-world graphs. This constrained evaluation could limit the generalizability of the findings, as the performance and robustness of the proposed multi-layer GAT architecture in larger, more diverse networks remain untested. 
Evaluating on a broader and more challenging collection of datasets would provide stronger evidence of the method’s practical utility across a variety of realistic scenarios. Theoretical Claims: I examined the proofs for the core theoretical claims, particularly those underpinning Theorems 1, 2, and 3. Overall, the proofs appear largely rigorous; however, some steps—especially those involving asymptotic bounds and the handling of high-probability events—could benefit from additional clarification. Experimental Designs Or Analyses: I reviewed the experimental designs and analyses, particularly those in Section 4. The experiments are generally well-structured to validate the theoretical claims, with controlled synthetic settings that mirror the assumptions of the CSBM and provide clear benchmarks for evaluating over-smoothing and SNR improvements. However, as I mentioned above, the limited number and scale of the real-world datasets used is a notable issue. Supplementary Material: Yes, I reviewed the supplementary material, focusing primarily on the detailed proofs provided in Appendices D, E, and F, which cover the derivations and technical lemmas underpinning Theorems 1, 2, and 3. Relation To Broader Scientific Literature: The paper’s contributions extend a well-established line of research on graph neural networks by deepening our understanding of when and how graph attention mechanisms provide benefits. It builds on previous findings regarding the limitations of standard aggregation methods, such as over-smoothing in deep architectures, and clarifies the interplay between different types of noise in graph data. By deriving precise conditions under which attention mechanisms enhance performance compared to simpler operations, the work refines theoretical models of node classification. Moreover, it introduces a multi-layer architecture that relaxes previously stringent signal-to-noise requirements. 
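For context, the CSBM generative model underlying the analysis can be sampled roughly as follows. This is a generic two-class sketch under our own assumptions (the function name and exact parameterization are ours), not the paper's precise definition:

```python
import numpy as np

def sample_csbm(n, p, q, mu, sigma, seed=0):
    """Two-class contextual SBM: intra-class edge probability p,
    inter-class probability q (p > q is the homophilic regime),
    Gaussian features centered at +mu / -mu with noise level sigma."""
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, size=n) * 2 - 1           # labels in {-1, +1}
    same = np.equal.outer(y, y)                      # same-class indicator
    probs = np.where(same, p, q)
    A = (rng.random((n, n)) < probs).astype(int)
    A = np.triu(A, 1)
    A = A + A.T                                      # undirected, no self-loops
    X = y[:, None] * mu + rng.normal(scale=sigma, size=(n, len(mu)))
    return A, X, y

A, X, y = sample_csbm(200, p=0.10, q=0.02, mu=np.array([1.0, 1.0]), sigma=0.5)
```

The gap between $p$ and $q$ controls the structure noise, while the size of $\mu$ relative to $\sigma$ controls the feature SNR that the reviews discuss.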
Essential References Not Discussed: While I appreciate the contributions of the paper, a discussion of previous attempts at rigorously understanding attention in graph neural networks seems to be missing or at the very least incomplete. In particular, [1] establish that attention in GNNs cannot mitigate oversmoothing. Could the authors clarify why this does not contradict their results? [1] Wu, X., Ajorlou, A., Wu, Z. and Jadbabaie, A., 2023. Demystifying oversmoothing in attention-based graph neural networks. Advances in Neural Information Processing Systems, 36, pp.35084-35106. Other Strengths And Weaknesses: - Other Comments Or Suggestions: I would encourage the authors to extend their experiments to larger datasets. For example, I would appreciate it if the authors could provide some results on the heterophilous node classification datasets [2]. [2] Platonov, O., Kuznedelev, D., Diskin, M., Babenko, A. and Prokhorenkova, L., 2023. A critical look at the evaluation of GNNs under heterophily: Are we really making progress?. arXiv preprint arXiv:2302.11640. Questions For Authors: Please see "Essential References Not Discussed". Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and recognition of our paper. In response to your questions and suggestions, we provide the following clarifications: ### 1. **Additional experiments on more comprehensive datasets:** Based on your suggestion, we conducted supplementary experiments on a larger dataset (*ogbn-arxiv*) and five heterophily datasets [2] to validate the correctness and practical value of our theoretical results. Please refer to the following link for the results and details: https://drive.google.com/file/d/1ALWkkazk1LPjaWSSkL28RCM7ywOsoBEW/view?usp=drive_link. Below is a brief overview of the experimental setup and corresponding results. (1). According to our theoretical findings, when feature noise dominates, the graph attention mechanism becomes ineffective, and we should reduce the attention intensity parameter $t$. Conversely, when structure noise dominates, the attention mechanism becomes effective, and we should increase the attention intensity parameter $t$. Accordingly, we design GATv2*(temp), a graph attention model with adjustable attention intensity, based on this idea. In fact, the parameter $t$ can be seen as the reciprocal of the temperature coefficient $T$ applied to the softmax layer in the attention coefficient computation (i.e., $T = t^{-1}$). Therefore, we adjust the attention mechanism's intensity by tuning the softmax temperature coefficient $T$. In the original paper, this experiment was implemented only on simulated datasets, but we extend it to real-world datasets in the supplementary experiments. We conduct experiments on six real-world datasets; for the heterophily datasets with stronger structure noise, we enhance the attention intensity by setting $T = t^{-1} = [0.2, 0.5, 1]$ for the three layers.
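The temperature adjustment described in this rebuttal (attention intensity $t$ as the reciprocal of a softmax temperature $T$) can be sketched as follows. This is a minimal illustration with made-up logits, not the authors' GATv2*(temp) implementation; the function name is ours:

```python
import numpy as np

def attention_coeffs(logits, T):
    """Softmax over a node's neighbor scores with temperature T = 1/t.

    A large T (small intensity t) flattens the weights toward uniform,
    GCN-like mean aggregation; a small T (large t) sharpens them,
    concentrating weight on high-scoring neighbors."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                               # numerical stability
    w = np.exp(z)
    return w / w.sum()

logits = [2.0, 0.5, -1.0]                  # hypothetical raw attention scores
sharp = attention_coeffs(logits, T=0.2)    # strong attention (t = 5)
flat = attention_coeffs(logits, T=5.0)     # weak attention (t = 0.2)
```

With `T = 0.2` nearly all weight lands on the first neighbor, while `T = 5.0` yields close-to-uniform weights, matching the observation in this thread that weak attention degenerates toward GCN-style averaging.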
For the more homophilic *ogbn-arxiv* dataset, we use a smaller initial attention intensity that then progressively increases (corresponding to the discussion in lines 369-376 of the paper), i.e., $T = t^{-1} = [2, 2, 1]$. The accuracy results of the different models are shown in Figure 1 and Table 1 in the above link, with GATv2*(temp) achieving the best overall performance. (2). Additionally, we compare different models under two types of noise on the *ogbn-arxiv* dataset and plot accuracy heatmaps for each model. Upon observation, we find that the GAT-based methods show an improvement over GCN primarily in areas with stronger structure noise, which is reflected in the upper portion of the heatmap. This validates our theoretical findings. (3). In the case of strong feature noise, the GAT-based methods did not show a significant performance degradation. Upon visualizing the parameters, we find that the GAT method, through learning, assigned nearly equal weights to all neighbors, eventually degenerating into a GCN. However, when structure noise is strong, GAT performs a noticeable selection of valuable neighbors, which leads to superior performance over GCN. These results can be clearly seen in Figure 3 of the linked material. ### 2. **Clarification on the contradiction with [1]:** This is a very valuable question, and we appreciate your inquiry. The apparent contradiction with the conclusions in [1] stems from different definitions of the over-smoothing measure. The over-smoothing definition used in [1] is derived from [3], where the relationship between node features and the number of layers $L$ is analyzed without considering the relationship between layer depth and the number of nodes $n$. As a result, even if over-smoothing occurs only after $L = \omega(n)$ layers, the measure defined in [1] would still classify it as an over-smoothing phenomenon, which does not align with real-world scenarios.
In practice, over-smoothing typically happens at depths much smaller than the total number of nodes. Therefore, we introduce an improved measure of over-smoothing in our paper, restricting $L = O(n)$, as detailed at the beginning of Section 3.3 and in Definition 2. Additionally, our simulation results, shown in Figure 1(c), demonstrate this phenomenon: in regimes with sufficiently high SNR, as the attention intensity $t$ increases, the decay of the over-smoothing measure (i.e., the node-similarity measure) slows from exponential to nearly linear. [1] Wu, X., Ajorlou, A., Wu, Z., and Jadbabaie, A. Demystifying oversmoothing in attention-based graph neural networks. [2] Platonov, O., Kuznedelev, D., Diskin, M., Babenko, A., and Prokhorenkova, L. A critical look at the evaluation of GNNs under heterophily: Are we really making progress?. [3] Rusch, T. K., Bronstein, M. M., and Mishra, S. A survey on oversmoothing in graph neural networks.
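As a rough, self-contained illustration of the over-smoothing phenomenon debated in this exchange (generic mean aggregation on a random graph, not the paper's Definition 2 measure), repeated graph convolution without attention drives node features together at a roughly geometric rate:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T)                     # symmetrize: undirected graph
np.fill_diagonal(A, 1.0)                   # add self-loops
P = A / A.sum(axis=1, keepdims=True)       # row-normalized mean aggregation

X = rng.normal(size=(n, 4))                # random node features
spread = []
for _ in range(30):
    X = P @ X                              # one plain convolution layer (no attention)
    spread.append(X.std(axis=0).mean())    # how far apart node features remain

# spread shrinks rapidly toward 0: all node representations collapse
# to nearly identical vectors -- the over-smoothing regime
```

The attention-based variants discussed in the thread are claimed to slow exactly this kind of decay when the SNR is sufficiently high.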
Summary: The paper studies the effectiveness of graph attention networks in the contextual stochastic block model (CSBM) setting. It builds on prior work by Fountoulakis et al. (JMLR, 2023) and the graph attention retrospective in the same setting. In comparison to that work, the main difference appears to be a linear multi-layer graph attention mechanism with non-linearity at the end, along with an assumed simpler attention scoring function (Eq. 3). The main focus is on theoretical results in this simplistic setting, and trade-offs between SNR (Eq. 2) and inverse SNR relative to over-smoothing. There are several results on over-smoothing, but the most compelling ones are in Theorems 3 and 4, which indicate a setting where linear multi-layer GAT might be helpful and graph convolutions suffer from over-smoothing. ### POST-REBUTTAL I will not be arguing against this paper and if the other reviewers feel strongly about it, I'm fine that it is accepted. Claims And Evidence: Several theoretical results, but it is not really clear how realistic the underlying assumptions and the overall setting are, especially in the context of real-world applications and graph attention architectures. Experimental results are provided on synthetic data and three real-world datasets. In the latter, there does not appear to be differentiation in performance between GAT* (multi-layer linear graph attention) and GCNs. Methods And Evaluation Criteria: This is a theoretical paper and might be fine with a limited set of experiments. However, these don't seem to illustrate the difference between GCNs and GAT*s on real-world datasets. Proof techniques, in my understanding, mimic prior work by Fountoulakis et al. (JMLR, 2023), and in that regard the contribution is rather incremental. Theoretical Claims: I have not checked the proofs carefully, but have skimmed through a few of them. What I did check seemed fine, but there was one aspect that was confusing.
The entire analysis seems to be done via the simplistic attention mechanism in Eq. (3). In that regard, the mechanism in Fountoulakis et al. (JMLR, 2023), while still simplistic, seemed more characteristic of what one can expect in GATs. Experimental Designs Or Analyses: Limited set of experiments; this aspect of the paper can be improved, especially when illustrating the point on GCNs and GAT* on real-world datasets. Supplementary Material: Skimmed through the appendix Relation To Broader Scientific Literature: Related work appears to be adequately covered. Essential References Not Discussed: I'd say that the main reference was listed but the discussion could be improved relative to Fountoulakis et al. (JMLR, 2023), both in terms of proof techniques, limitations of the problem setting, and results. Other Strengths And Weaknesses: See above Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. Our response is as follows: ### 1. **Additional Experiments** You mentioned that the experiments on real-world datasets were insufficient. We greatly appreciate your feedback and have conducted additional experiments on more comprehensive datasets, with the results available at https://drive.google.com/file/d/1ALWkkazk1LPjaWSSkL28RCM7ywOsoBEW/view?usp=drive_link. We select an additional 6 datasets, including a larger-scale dataset, *ogbn-arxiv*, and 5 heterophily graph datasets (e.g., *roman-empire*). In addition to the basic GAT model, we also employ the GATv2 attention mechanism. These new experiments on the extended datasets validate both the correctness and practical value of our theoretical results. Specifically, based on our theoretical findings (Theorem 2), similar to GAT*, we design GATv2*(temp), which adjusts the attention intensity (i.e., the parameter $t$ in the paper). We confirm that by simply selecting the value of $t$ based on dataset and noise characteristics, we can achieve better results than the baseline GAT model across multiple datasets (see Figure 1 and Table 1 in the above link). In the presence of feature noise and structure noise, we plot accuracy heatmaps for different algorithms to make a more detailed comparison. GATv2*(temp) performs the best, particularly under high noise conditions (see Figure 2 in the above link). Notably, we find that the parameter $t$, which controls the attention intensity, is actually the inverse of the temperature coefficient $T$ that controls the sharpness of the softmax output attention coefficient distribution (i.e., $t = T^{-1}$), so we adjust the attention intensity by tuning the value of $T$ in the experiments. Furthermore, we observe that when only feature noise was added, the GAT-based method did not show a clear advantage or disadvantage compared to GCN.
By visualizing the attention weights (see Figure 3 in the above link), we find that GAT assigned nearly equal weights to all neighbors, eventually becoming a GCN. However, with strong structure noise, GAT successfully selected valuable neighbors and outperformed GCN. ### 2. **Comparison of Proof Techniques with Fountoulakis et al. (JMLR, 2023)** Our work is inspired by the JMLR paper, but the proof techniques we use are significantly different, and we have derived broader and more novel results. Specifically: (1) **We consider multi-layer GAT, whereas Fountoulakis et al. (JMLR, 2023) considers a single-layer GAT.** We must emphasize that this extension is **non-trivial** and involves several theoretical challenges. The analysis of multi-layer GATs requires us to accurately characterize the distribution of node features after passing through the GAT layers, rather than simply focusing on the sign of the output, as in single-layer GAT analysis (as done in JMLR23). The attention mechanism used in JMLR23 is too complex to be analyzed in a multi-layer setting, which is why we choose to design a simpler attention mechanism for theoretical analysis. Although this mechanism is simpler, we theoretically prove that its performance on the CSBM perfect node classification task is comparable to that of the attention mechanism in JMLR23, as shown in Theorem 1. The proof techniques used here are the only part that shares similarities with those in JMLR23. However, the core of our proof is presented in Theorem 2, which precisely characterizes the distribution of node features after one GAT layer. This is a significant challenge because, even with the simplest attention mechanism, the process is highly nonlinear, meaning that after passing through the GAT layer, node features no longer follow a Gaussian distribution.
Moreover, this distribution is also influenced by the number and distribution of neighboring nodes, introducing further randomness, all of which makes it difficult to characterize accurately. In our paper, the proof of Theorem 2 spans pages 17 to 32 of the appendix, and this part forms the core of our theoretical analysis. Finally, extending from single-layer to multi-layer analysis is a well-known challenge in deep learning theory, with related works published in top venues such as FOCS, COLT, and JMLR. (2) **We also address the over-smoothing issue (see Theorem 3).** This analysis was not covered in Fountoulakis et al. (2023) and involves many completely different proof techniques. We define a new measure for over-smoothing and, using the results from Theorem 2 on the distribution changes of node features in multi-layer GAT, prove that graph attention mechanisms can positively mitigate the over-smoothing problem. We also find that the effectiveness of graph attention mechanisms in addressing over-smoothing depends on the SNR of the graph data: the higher the initial SNR, the more pronounced the role of the graph attention mechanism. Thank you for your thoughtful feedback, and we look forward to further discussions. --- Rebuttal Comment 1.1: Comment: Thank you for the additional experiments. Unfortunately, I'm not keen on following a link to google-drive and would appreciate if the experiments were summarized in the response. My understanding is that the additional experiment shows that the GATv2* version does behave as the theoretical results indicate but that the same might not be true for GAT*. I have listed in my summary that the paper considers multi-layer GAT but not in the classical sense. Namely, the focus is on multi-layer *linear GAT* with a single non-linearity at the end. While slightly different from Fountoulakis et al., where there was a single linear layer, the difference is not substantial relative to how GATs are used in practice.
I also did not rate the theoretical contributions as trivial, but I view the results, findings, and overall technique as incremental. I have read all the reviews and the rebuttal, and will make the final decision during the discussion with other reviewers. --- Reply to Comment 1.1.1: Comment: Thank you for your response. For your convenience, as well as for those reviewers who may not have had time to view the link, we summarize the key aspects of our additional experiments as follows: ## **1. Experimental Settings and Main Results** We conducted supplementary experiments on six additional datasets and explored two attention mechanisms: GAT and GATv2. For the *ogbn-arxiv* dataset, we gradually increased both feature noise and structure noise, and recorded the performance of each model. For comparison, in addition to the GAT* setup from the original paper (which applies a GCN layer followed by a graph attention layer), we designed GAT*(temp) and GATv2*(temp) to precisely control the intensity of the attention mechanism, parameterized by $t$. Notably, GAT* can be seen as a **special case** of GAT*(temp) where the first layer's attention strength is fixed at $t=0$. Another motivation behind introducing GAT*(temp) and GATv2*(temp) was to handle heterophily graph datasets, where increasing rather than decreasing the attention intensity is necessary. For the larger homophily dataset *ogbn-arxiv*, we use GAT(v2)* and GAT(v2)*(temp) as comparison methods, with the primary focus on reducing the attention intensity $t$. For the five heterophily datasets, we use GAT(v2)*(temp) as the comparison method, with a larger attention intensity $t$. Below, we provide a table that reports not only the classification accuracy of each model across different datasets but also the corresponding values of the attention intensity parameter $t$.
| | ogbn-arxiv | ogbn-arxiv (with high $F_{noise}$) | ogbn-arxiv (with high $S_{noise}$) | Attention Intensity $t$ | roman-empire | amazon-ratings | minesweeper | questions | tolokers | Attention Intensity $t$ |
|---|---|---|---|---|---|---|---|---|---|---|
| GCN | 71.06% | 69.23% | 47.89% | $[0, 0, 0]$ | 32.94% | 46.96% | 58.21% | **57.44%** | 65.04% | $[0, 0, 0]$ |
| GAT | 70.86% | 68.63% | 50.31% | $[1, 1, 1]$ | 44.72% | 48.62% | 65.95% | 52.68% | 65.46% | $[1, 1, 1]$ |
| GAT* | 70.89% | 69.60% | 50.60% | $[0, 1, 1]$ | / | / | / | / | / | / |
| GAT*(temp) | 70.98% | 69.57% | 50.86% | $[\frac{1}{2}, \frac{1}{2}, 1]$ | 46.69% | 49.08% | 69.33% | 52.81% | 66.71% | $[2, 2, 1]$ |
| GATv2 | 71.41% | 69.17% | 60.04% | $[1, 1, 1]$ | 74.45% | 48.89% | **70.92%** | 53.85% | 66.46% | $[1, 1, 1]$ |
| GATv2* | 71.19% | 69.82% | 60.29% | $[0, 1, 1]$ | / | / | / | / | / | / |
| GATv2*(temp) | **71.61%** | **69.97%** | **61.36%** | $[\frac{1}{2}, \frac{1}{2}, 1]$ | **76.91%** | **49.38%** | 70.89% | 53.12% | **67.44%** | $[2, 2, 1]$ |

## **2. Conclusion** As you pointed out, our experiments validate our theoretical results. Specifically, GAT*(temp) and GATv2*(temp) consistently outperform baseline models. The results for GAT* and GATv2* also align with our theoretical conclusions, though models with tunable $t$ (i.e., the (temp) variants) demonstrate superior performance. On the *ogbn-arxiv* dataset, graph attention-based methods show a significant improvement when structure noise is strong. However, when feature noise is dominant, they have little to no effect (with the original version of GAT performing even worse than GCN). Moreover, heterophily graph datasets, although outside the scope of our paper (which assumes $p > q$), can be seen as cases with very strong structure noise, where the noise from neighboring nodes' features even outweighs the information.
A simple conjecture is that a stronger graph attention intensity, i.e., a larger parameter $t$, is needed, which is also confirmed by our experiments. ## Additional Clarification: We would like to clarify that, although our theoretical analysis is based on a simplified version of GAT, it provides new insights by precisely characterizing the conditions under which graph attention mechanisms are effective. Additionally, our theoretical results offer clear guidance on how to adjust attention intensity under different noise conditions, which serves as a crucial reference for selecting and tuning attention mechanisms in practical applications. During our supplementary experiments, we observed that most existing open-source graph attention mechanisms lack control over attention intensity. However, our theoretical and empirical findings reveal that incorporating such a control mechanism significantly enhances performance—an improvement that is applicable to nearly all graph attention models. This could be an important broader impact of our work in terms of practical value. Thank you once again for your response. We hope our explanation of the additional experiments has addressed your concerns and look forward to further discussions with you.
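The simplified attention mechanism the authors describe in this thread (a pre-softmax score of $+t$ for neighbors whose features point the same way, $-t$ otherwise) can be caricatured as below. The feature vectors and function name are made up; this is only our sketch of the idea, not the paper's exact Eq. (3):

```python
import numpy as np

def simplified_attention(x_i, neighbors, t):
    """Toy sign-based attention: score +t when the dot product with a
    neighbor is positive (likely same class), -t otherwise, then softmax.
    t = 0 recovers uniform (GCN) weights; large t suppresses apparent
    inter-class neighbors."""
    scores = np.array([t if x_i @ x_j > 0 else -t for x_j in neighbors])
    w = np.exp(scores - scores.max())
    return w / w.sum()

x_i = np.array([1.0, 0.5])
nbrs = [np.array([0.9, 0.4]), np.array([-1.1, -0.3]), np.array([0.8, 0.7])]
uniform = simplified_attention(x_i, nbrs, t=0.0)    # uniform weights
selective = simplified_attention(x_i, nbrs, t=3.0)  # down-weights neighbor 2
```

Setting `t=0` yields uniform weights, while a large `t` concentrates weight on apparently same-class neighbors, which is the sense in which $t$ acts as an attention intensity in this discussion.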
Summary: The paper theoretically analyzes the effectiveness of graph attention for node classification tasks in graphs with a CSBM structure and varying levels of feature and structure noise, and concludes that high feature noise renders graph attention ineffective, whereas graph attention is beneficial in the case of low feature noise and high structure noise. ## update after rebuttal: I thank the authors for answering my questions and appreciate their efforts in providing further results during the rebuttal. While I am not too fond of tuning another parameter (t) for each layer in the model, the concept of attention intensity is interesting and perhaps it could be possible to make it learnable in the future. Also, while it intuitively makes sense that the attention coefficients would vary more when a node has a more varied neighborhood (possibly due to high structure noise), a rigorous theoretical analysis to back it is good. Secondly, attention coefficients do not necessarily always learn varied and sparser patterns due to trainability issues. In such cases, having an aid such as an attention intensity parameter to improve the attention mechanism could be promising. Therefore, I have raised my score. Claims And Evidence: The authors support their theoretical claims with proofs and empirical evidence from experiments on synthetic data. Methods And Evaluation Criteria: - While the theoretical claims are supported by empirical evidence on synthetic datasets, their practical benefits for real-world scenarios are not very clear. For example, the authors mention that their findings provide insights for practical applications such as selecting graph attention based on graph data characteristics and designing noise-robust networks. However, the SNR ratio, based on which the decision to use graph attention or not is to be made, is not usually a known property of the graph beforehand.
- Secondly, it is unclear how $t$ is determined; for instance, is it a learnable parameter or a hyperparameter to be tuned with grid search? In the experiments, it is an independent variable that is manually varied. - Furthermore, it is mentioned that the newly proposed attention mechanism is designed with homophilic graphs in mind. While this is a limitation in my opinion, given the current focus on devising attention-based methods that adapt to both homophily and heterophily [1-4], I would still like to request the authors for further clarification on this, because intuitively it seems that having both positive and negative values for $t$, based on the dot product of two node feature vectors that signifies their similarity/dissimilarity and/or alignment/misalignment, could also possibly deal with both homophily and heterophily. [1] Eliasof et al. Improving Graph Neural Networks with Learnable Propagation Operator [2] Bo et al. Beyond Low-frequency Information in Graph Convolutional Networks [3] Mustafa et al. GATE: How to Keep Out Intrusive Neighbors [4] Finkelshtein et al. Cooperative Graph Neural Networks Theoretical Claims: The theoretical claims seem correct but the mathematical details were not checked in detail. Experimental Designs Or Analyses: While the experiments to verify the theoretical claims of the relationship between the SNR ratio and the effectiveness of the attention mechanism are well designed, I do think that experiments showing the benefit of the proposed/analyzed attention mechanism on real-world datasets are missing. Is it competitive with the standard attention mechanism used in GAT [5] or its various variants? See review section 'method and evaluation criteria' for a similar discussion. [5] Brody et al. How Attentive are Graph Attention Networks? Supplementary Material: The appendices were also reviewed though the proofs were not studied in detail.
Relation To Broader Scientific Literature: The role of attention in graph learning is of great interest to the GNN community today. While most literature on improving/understanding graph attention is focused on the original GAT architecture or its various variants, this paper analyzes a simpler attention mechanism but provides novel insights into its relationship with the signal-to-noise ratio of node features. However, its applicability in real-world scenarios is not well-evaluated. Essential References Not Discussed: None that I can recall. Other Strengths And Weaknesses: **Strengths** The paper contributes to theoretically understanding an important aspect of graph learning, i.e. graph attention. **Weaknesses** The analyzed graph attention is different from the mechanism in the original GAT or its variants that are usually employed, and the effectiveness of the proposed attention mechanism is unclear in the real-world setting. A more detailed discussion of this is in the evaluation criteria and experiment design sections of the review. Other Comments Or Suggestions: • It may be worthwhile to visualize the distribution of actual attention coefficients ($c_{ij}$) for the conducted experiments to better understand/verify what's going on under the hood (for example, in a similar way to [3]). [3] Mustafa et al. GATE: How to Keep Out Intrusive Neighbors Questions For Authors: 1. For experiments on real-world datasets, are the original node features discarded, since perfect node classification is achieved? 2. How can the value of $t$ be determined for real-world datasets? 3. Are inter-class edges considered as structural noise? If so, is this a fair assumption, as it is not necessarily true and also unlikely in the real-world setting, with perfect node classification not always being aligned with the community structure? 4. Further questions inline in the review sections of evaluation criteria and experimental design.
Edit: While I am not too fond of tuning another parameter (t) for each layer in the model, the concept of attention intensity is interesting and perhaps it could be possible to make it learnable in the future. Also, while it intuitively makes sense that the attention coefficients would vary more when a node has a more varied neighborhood (possibly due to high structure noise), a rigorous theoretical analysis to back it is good. Secondly, attention coefficients do not necessarily always learn varied and sparser patterns due to trainability issues. In such cases, having an aid such as an attention intensity parameter to improve the attention mechanism could be promising, as shown by the authors in the additional experiments and analysis during the rebuttal. Therefore, I have raised my score. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments. In response to multiple reviewers’ suggestions, we have added experiments on a larger dataset (*ogbn-arxiv*) and five heterophily datasets (e.g., *roman-empire*). Please find the details at the following link https://drive.google.com/file/d/1ALWkkazk1LPjaWSSkL28RCM7ywOsoBEW/view?usp=drive_link. Below, we address your concerns point by point. ### 1. **Practical Benefits in Real-World Scenarios** Our conclusions consider both SNR and structure noise, providing insights into when and how to use graph attention mechanisms. Specifically, by measuring these two factors, we can determine whether to apply graph attention and how to adjust its intensity (see Remark 3 in lines 278-292 and the discussion in lines 369-376). Moreover, the SNR of real-world datasets is not entirely unknown. It can be estimated using the expectation and variance of node features for each class. Even if not all node features are observable, we can approximate SNR using only training data. More importantly, the absolute SNR value matters less than its comparison with structure noise, which helps guide the choice and design of GAT. For example, in heterophily graphs with high structure noise, increasing the intensity of graph attention is beneficial. Conversely, if the features of nodes from different classes are not well distinguishable or have high variance, it may be preferable to avoid using GAT or reduce its intensity. ### 2. **Choice of Parameter $t$** First, we would like to emphasize that the attention mechanism proposed in this paper is a simplified version, designed to distill the essence of various graph attention mechanisms for theoretical analysis. The parameter $ t $ controls the intensity of attention and is set based on structure and feature noise before running experiments. 
Conceptually, if we view our attention mechanism as assigning a +1 weight to intra-class edges and a -1 weight to inter-class edges before applying softmax, then $t$ functions similarly to the temperature coefficient $T$ in softmax, controlling the sharpness of the output weight distribution, where $t = \frac{1}{T}$. ### 3. **Analysis of Heterophily** This is a great question. For the binary symmetric CSBM model, a heterophily analysis can indeed be performed. When $q > p$, we can modify Equation (3) to assign a weight of $-t$ to edges where $X_i X_j > 0$ and $t$ to edges where $X_i X_j < 0$, without significantly affecting later results. However, we focus on the homophily assumption ($p > q$) because real-world tasks often involve multi-class classification, where simply adjusting $t$ is insufficient and requires more complex analysis. More fundamentally, in heterophily graphs, a key question is whether GNNs' message-passing paradigm remains effective. Understanding how to incorporate global structure information into node features should take precedence over studying attention mechanisms in this setting. ### 4. **Experiments and Analysis** As noted earlier, our proposed attention mechanism is a simple version designed for theoretical analysis and does not include learnable parameters. However, as discussed in point 2, our findings suggest a simple way to improve existing attention mechanisms by introducing a temperature coefficient $T$ (i.e., $t^{-1}$) in the softmax layer to adjust attention intensity. Our theory indicates that when structure noise dominates, a larger $t$ (smaller $T$) is preferable, whereas when feature noise dominates, a smaller $t$ (larger $T$) works better. We add this to GATv2, denoted GATv2*(temp), and show that it consistently outperforms the unmodified GATv2 across multiple datasets, including heterophily graphs (see Figures 1 and 2 and Table 1 in the link above). ### 5.
**Visualization of Attention Coefficients** This is a great suggestion. We add visualizations of attention coefficients in our supplementary experiments (see Figure 3 in the link above). We find that, for the *ogbn-arxiv* dataset, when feature noise increases, the performance of the GAT-based model is similar to that of GCN, with neither improvement nor degradation. The visualizations results show that this happens because GAT, by learning to assign equal weights to all neighbors, essentially degenerates into GCN. However, when structure noise increases, GAT shows significant performance gains (see Figure 1 in the link above for the comparison). The attention weight visualization in Figure 3 in the link above confirms that GAT effectively filters important neighbors in such cases. ### Additional Clarifications: (1) No, the preprocessing of node features is the same as in all GNNs. (2) See point 2 and 4 for details. (3) Not all inter-class edges are considered noise; noise is measured by the ratio of inter-class to intra-class edges, $ \frac{p+q}{p-q} $. We appreciate your feedback and hope these responses clarify your concerns. --- Rebuttal Comment 1.1: Comment: I thank the authors for answering my questions and appreciate their efforts in providing further results. While I am not too fond of tuning another parameter (t) for each layer in the model, the concept of attention intensity is interesting and perhaps it could be possible to make it learnable in the future. Also, while it intuitively makes sense that the attention coefficients would vary more when a node has a more varied neighborhood (possibly due to high structure noise), a rigorous theoretical analysis to back it is good. Secondly, attention coefficients do not necessarily always learn varied and sparser patterns due to trainability issues. In such cases, having an aid such as an attention intensity parameter to improve the attention mechanism could be promising. Therefore, I have raised my score. 
--- Reply to Comment 1.1.1: Comment: Thank you very much for recognizing our response and for raising your score. Making the attention intensity a learnable parameter is an interesting idea, and we will explore this direction in future work.
ROPO: Robust Preference Optimization for Large Language Models
Accept (poster)
Summary: The paper considers the alignment problem of large language models (LLMs) trained on noisy preference data, where human preferences are flipped with a certain probability $\eta$. To align the LLM in a robust manner and mitigate performance degradation due to noisy data, the authors propose an iterative alignment framework that alternates between training the model in a noise-tolerant manner and filtering out noisy samples. They begin by formulating this as a constrained optimization problem, aiming to minimize a weighted loss over the samples, where the weights are dynamic and intended to be smaller for noisy samples. Moreover, they introduce a constraint ensuring that the sum of these weights remains a fraction of the total number of samples, thereby reducing the effect of noisy data. Next, they analyze the noise tolerance of the Direct Preference Optimization (DPO) loss and demonstrate that it is ineffective in distinguishing between noisy and clean samples. Specifically, DPO aggressively updates the model parameters to fit noisy samples during gradient descent. To address this issue, the authors propose an alternative loss function, $l_{na}$, which exhibits greater tolerance to noise and better differentiates between clean and noisy samples. Furthermore, they introduce the ROPO loss, $l_{ropo}$, which combines $l_{na}$ and $l_{dpo}$ with a trade-off parameter $\alpha$. A detailed discussion is provided on the choice of this parameter and its practical implications. Finally, they propose a rejection sampling strategy that generates new responses for samples identified as noisy and creates candidate samples using both the generated responses and the original responses from the dataset. They then select the candidate sample with minimal loss and include it in the next stage of training. 
In the experimental section, they evaluate the effectiveness of their proposed method against baselines across three datasets and two base models under different levels of artificial noise injected into the data. Additionally, they conduct ablation studies to examine the impact of different framework components and hyperparameters on overall performance. Claims And Evidence: All theorems stated in the paper are accompanied by proper proofs in the appendix. However, the robustness-guided rejection sampling strategy presented in Section 3.3 lacks a rigorous theoretical foundation and is primarily heuristic in nature. Specifically, the authors sample multiple responses from the model and generate candidate samples using both the newly generated responses and the original responses from the dataset. They then select the candidate sample with minimal loss and include it in the next stage of training. However, a formal justification for the effectiveness and robustness of this strategy in practice is lacking. In particular, there is no guarantee that this strategy would not introduce noisy samples into the data. A more thorough theoretical analysis would strengthen the validity of this approach. Methods And Evaluation Criteria: The proposed method has been properly evaluated on multiple datasets and models. Further, ablation studies have been performed to study the utility of the different components of the method. However, one limitation is that the noise has been artificially injected into the dataset to demonstrate the utility of the approach (in the 0% noise injection case, the performance difference is minimal except for the TL;DR dataset). Theoretical Claims: All theorems stated in the paper have proper proofs in the appendix. Experimental Designs Or Analyses: The proposed method has been properly evaluated on multiple datasets and models. Further, ablation studies have been performed to study the utility of the different components of the method. 
However, in Tables 2 and 3, other baselines like cDPO and rDPO are missing. In particular, it would be interesting to compare the performance of rDPO against different components of their proposed ROPO framework in the ablation studies. Supplementary Material: Yes, particularly reviewed the proofs in Appendix F. Relation To Broader Scientific Literature: The paper analyzes the effect of noisy preference data in the alignment of large language models (LLMs). Prior work on this problem primarily focuses on alternative loss functions that are robust to noise and often require prior knowledge of the percentage of noisy data. In this work, the authors first propose a noise-tolerant loss function that facilitates the identification of noisy samples while preventing overfitting to noisy data. Building on this, they introduce an effective noise-filtering strategy to remove noisy samples from the dataset. Furthermore, they propose a robustness-guided rejection sampling technique to introduce new clean samples into the data. Essential References Not Discussed: A few references in the area of robust preference optimization of LLMs are missing. Namely, 1. Choi, Eugene, Arash Ahmadian, Matthieu Geist, Olivier Pietquin, and Mohammad Gheshlaghi Azar. "Self-Improving Robust Preference Optimization." arXiv preprint arXiv:2406.01660 (2024). 2. Bukharin, Alexander, Ilgee Hong, Haoming Jiang, Zichong Li, Qingru Zhang, Zixuan Zhang, and Tuo Zhao. "Robust Reinforcement Learning from Corrupted Human Feedback." arXiv preprint arXiv:2406.15568 (2024). 3. Yan, Yuzi, Xingzhou Lou, Jialian Li, Yiping Zhang, Jian Xie, Chao Yu, Yu Wang, Dong Yan, and Yuan Shen. "Reward-Robust RLHF in LLMs." arXiv preprint arXiv:2409.15360 (2024). 4. Wu, Junkang, Yuexiang Xie, Zhengyi Yang, Jiancan Wu, Jiawei Chen, Jinyang Gao, Bolin Ding, Xiang Wang, and Xiangnan He. "Towards Robust Alignment of Language Models: Distributionally Robustifying Direct Preference Optimization." 
arXiv preprint arXiv:2407.07880 (2024). 5. Ramesh, Shyam Sundhar, Yifan Hu, Iason Chaimalas, Viraj Mehta, Pier Giuseppe Sessa, Haitham Bou Ammar, and Ilija Bogunovic. "Group Robust Preference Optimization in Reward-Free RLHF." Advances in Neural Information Processing Systems 37 (2024): 37100-37137. Other Strengths And Weaknesses: Strengths The paper analyzes the effect of noisy preference data in LLM alignment, addressing an important problem in this domain. The authors propose multiple strategies to tackle this issue in an iterative manner, collectively forming a novel framework in this area. Specifically, they first introduce a noise-tolerant loss function that facilitates the identification of noisy samples while preventing overfitting to noisy data. Building on this, they propose an effective noise-filtering strategy to remove noisy samples from the dataset. Furthermore, they introduce a robustness-guided rejection sampling technique to incorporate new clean samples into the data. Weaknesses 1. The robustness-guided rejection sampling strategy in Section 3.3 lacks a rigorous theoretical foundation and is primarily heuristic in nature. Specifically, the authors sample multiple responses from the model and generate candidate samples using both the newly generated responses and the original responses from the dataset. They then select the candidate sample with the minimal loss and include it in the next stage of training. However, a formal justification for the effectiveness and robustness of this strategy in practice is lacking. In particular, is there any guarantee that this strategy would not introduce noisy samples into the data? 2. Unlike prior noise-robust approaches that require knowledge of the percentage of noisy data, the proposed method iteratively estimates and filters noisy samples. However, the approach introduces a new hyperparameter, $\alpha$, which governs the trade-off between $l_{na}$ and $l_{dpo}$, and must be either estimated or predefined. 
Additionally, the method requires estimating the filtering ratio $\rho$, which may vary across different datasets and applications. Other Comments Or Suggestions: Consider adding the baseline methods' performance in Tables 2 and 3. Questions For Authors: Kindly refer to weaknesses above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer GZqo, Thank you for your valuable review. We respond to each comment as follows and sincerely hope that our response can properly address your concerns. Tables can be found in **GZqo.md** in **https://anonymous.4open.science/r/ICML25-ROPO-F6CD** # Claims And Evidence > C1: A formal justification and theoretical analysis for the effectiveness and robustness of the rejection sampling (RS) technique would strengthen the validity of this approach. **Res:** The reliability of RS can be guaranteed by the noise identification capability of our loss function, similar to Theorem 3.5. Specifically, since both our RS and noisy sample filtering use loss values as the criterion for sample selection/filtering, their effectiveness is guaranteed similarly. Due to the rebuttal length limit, we will include the detailed analysis in the paper. We would greatly appreciate your understanding. # Methods And Evaluation Criteria > M1: The noise has been artificially injected in the dataset to demonstrate the utility of the approach (under 0% noise injected, the performance difference is minimal except for the TL;DR dataset). **Res:** We understand your concern that artificial noise may not align well with real-world scenarios. For this, please see experiments in *Appendix E.3.1, E.3.2* under practical noise coming from annotators' trust in larger models over smaller ones and LLM comparisons. For your convenience, results are also provided in **Tables GZqo-1 and GZqo-2** in the anonymous link. As shown, ROPO significantly outperforms baselines in both settings. While ROPO's improvement may not always be substantial at 0% artificial noise, we humbly believe that experiments across several practical and artificial noise settings sufficiently demonstrate its advantages over baselines and its contributions to preference alignment. # Experimental Designs Or Analyses > E1: In Tables 2 and 3, cDPO and rDPO are missing. In particular, ... 
rDPO against different components of ROPO. **Res:** We have added them to Tables 2 and 3. Please see **Tables GZqo-3 and GZqo-4** in the anonymous link. # Essential References Not Discussed > R1: A few references [3-7] on robust preference optimization of LLMs are missing. **Res:** We will expand the Related Work section as follows. **Robust Preference Alignment of LLMs.** Many efforts have been made from various perspectives to achieve robust preference alignment [1-7]. Specifically, [1,2] use label smoothing to mitigate the impact of preference noise. [3] improves the model’s adaptability to different preference distributions and enables iterative output refinement by jointly optimizing a self-improvement policy and a generative policy. [4] models potentially corrupted preference labels as sparse outliers and solves an $\ell_1$-regularized maximum likelihood estimation problem, thereby consistently learning the true underlying reward. [5] introduces a multi-head reward model (RM) that reflects each head’s confidence in the output reward using the standard deviation of a Gaussian distribution, effectively addressing the challenge of RM imperfections in RM-based RLHF. [6] focuses on different forms of noise and enhances DPO’s resilience to both pointwise and pairwise noise in LLM alignment by leveraging Distributionally Robust Optimization (DRO). [7] robustly aligns LLMs to the preferences of diverse individual groups by incorporating group information into the LLM context and optimizing against the worst-case alignment performance across all groups. Compared to them, our method integrates noise-tolerance and noise-identification capabilities without external models, offering a novel paradigm for robust preference alignment. # Other Strengths And Weaknesses > W1: Please see C1. > W2: ROPO introduces a hyperparameter $\alpha$ and requires estimating $\rho$, which may vary across different datasets and applications. 
**Res:** Our ablations show that ROPO is insensitive to $\alpha$ and $\rho$ within our recommended range and does not require extensive tuning for different tasks. As stated in Section 4.2, we fix $\alpha=14$ and $\rho=0.2$ without tuning them in most experiments after we observe that ROPO is insensitive to $\alpha$ and $\rho$. For readers, we also recommend $\rho=0.2$ in practice. As for $\alpha$, we recommend $\alpha=14$ or $30$ on relatively objective tasks (e.g., summarization), and $\alpha=6$ or $14$ on relatively subjective tasks (e.g., dialogue). # Other Comments Or Suggestions > S1: Please see E1. --- [1] Provably Robust DPO: Aligning Language Models with Noisy Feedback [2] A Note on DPO with Noisy Preferences & Relationship to IPO [3] Self-Improving Robust Preference Optimization [4] Robust Reinforcement Learning from Corrupted Human Feedback [5] Reward-Robust RLHF in LLMs [6] Towards Robust Alignment of Language Models: Distributionally Robustifying Direct Preference Optimization [7] Group Robust Preference Optimization in Reward-Free RLHF
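For readers' reference, the interplay between the bounded loss $\ell_{\rm na} = 1-P$ and the unbounded loss $\ell_{\rm dpo} = -\log P$ (with $P$ the sigmoid of the implicit reward margin) under a trade-off weight $\alpha$ can be sketched in a few lines. The function name and the simple weighted-sum combination are illustrative assumptions, not the exact combination form used in the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def preference_losses(margin, alpha=14.0):
    """Loss components for one preference pair; `margin` is the implicit
    reward margin beta * (log-ratio of y_w minus log-ratio of y_l)."""
    p = sigmoid(margin)
    l_dpo = -math.log(p)  # unbounded: grows without limit on misranked pairs
    l_na = 1.0 - p        # bounded: saturates on confidently misranked
                          # (likely noisy) pairs, keeping updates conservative
    l_ropo = alpha * l_na + l_dpo  # assumed weighted combination
    return l_dpo, l_na, l_ropo

# A pair the model ranks correctly vs. one it confidently ranks the other way:
print(preference_losses(2.0))
print(preference_losses(-8.0))
```

The bounded $\ell_{\rm na}$ stays below 1 even for a large negative margin, whereas $\ell_{\rm dpo}$ grows roughly linearly in the margin's magnitude there.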
Summary: This paper addresses the problem of robustly learning preferences from noisy preference data. It proposes the ROPO framework, which iteratively filters noisy preference data and aligns the LLM with the filtered data. The ROPO framework consists of three key modules: 1) a noise-aware DPO loss for preference alignment, 2) noisy preference filtering based on the noise-aware loss, and 3) response resampling based on the noise-aware loss. The empirical results show that ROPO is better than other noisy preference learning baselines on extensive datasets. Claims And Evidence: yes, as far as I can see, the claims are well supported. Methods And Evaluation Criteria: The proposed iterative framework does make sense to robustly refine and learn the preference. The empirical results also support this. However, I have one concern: as the framework is heavily enhancing its internal preference judgement, would the following cases undermine the effectiveness of the framework? 1. If the LLM is aligned to the noisy preference in the beginning, would the error be carried and enlarged throughout the iteration? 2. If the target preference is very different from the initial model's judgement, would the preference learning be very inefficient? Theoretical Claims: yes, as far as I can see, the claims are accurate. Experimental Designs Or Analyses: The experiment setup is extensive; I am especially glad to see the ablation study on the noise ratio $\rho$ in Figure 3 and on different components in Table 3. One potential concern is that all compared baselines are noisy preference learning methods; is there any noisy data filtering baseline that can be compared? Supplementary Material: Appendix A, Algorithm 1. This is the main algorithm of the proposed ROPO framework. This is clear. Appendix B, Related works. This is also clear. Relation To Broader Scientific Literature: As far as I can see, this paper fits into the literature well. 
There hasn't been a paper on an iterative framework for robust preference learning. The noise-aware loss also makes sense. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - The content of this paper is substantial and the experiments and ablation study are extensive. Weakness: - The presentation of this paper is arguable. The related work section shouldn't be put into the appendix. There isn't a clear description of the overall framework in the main paper. The overall algorithm is put in Appendix A. Other Comments Or Suggestions: N/A Questions For Authors: Line 245, is this a typo? What's the difference between $l_{na}$ and $l_{dpo}$? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer tkDe, Thank you for your valuable review. We respond to each comment as follows and sincerely hope that our response can properly address your concerns. Figures and Tables can be found in **tkDe.md** in **https://anonymous.4open.science/r/ICML25-ROPO-F6CD** --- # Methods And Evaluation Criteria > M1: Would the following cases undermine the effectiveness of the framework? (1) If the LLM is aligned to the noisy preference in the beginning, would the error be carried and enlarged throughout the iteration? (2) If the target preference is very different from the initial model's judgement, would the preference learning be very inefficient? **Res:** These two cases will undermine the effectiveness of **various preference optimization methods**, not just ROPO. Please note that in the *standard practice* of DPO-like methods, preference optimization typically requires the initial model to have undergone *supervised fine-tuning (SFT)* on data that is in-distribution for the DPO algorithm beforehand [1,2]. In other words, **the model should possess a basic ability to judge preferences before preference optimization is performed**. Based on this prerequisite, we derive the gradient and corresponding loss function for ROPO. If the initial SFT model lacks basic capability, a reasonable suggestion is to perform SFT first to enhance its fundamental preference judgment ability, rather than proceeding directly with preference training. Otherwise, both DPO and ROPO would be ineffective. --- # Experimental Designs Or Analyses > E1: Is there any noisy data filtering baseline that can be compared? **Res:** Our experiments in **Appendix E.5** have tested a confidence-based data filtering method and we find that it underperforms the standard DPO, and thus underperforms ROPO. 
This further supports our claim that the widely used cross-entropy loss (i.e., DPO loss) **cannot serve as a reliable measure of model confidence** in scenarios containing noisy preferences. The details are as follows. According to [3], confidence-based data filtering is a popular approach to combat noisy preferences. Since our paper focuses on DPO-like methods that leverage implicit rewards, a natural choice is to use implicit rewards and the corresponding loss to reflect confidence. To this end, we conduct experiments combining DPO with noisy samples filtering (NSF) and rejection sampling (RS) using Mistral-7B as the base model and UFB as the training dataset. For your convenience, the results are shown in **Table tkDe-1** in the anonymous link. As can be seen, the incorporation of noisy samples filtering and rejection sampling degrades the performance of DPO, especially at 20% artificial noise. --- # Other Strengths And Weaknesses > W1: The related work section shouldn't be put into appendix. There isn't clear description of the overall framework in the main paper. The overall algorithm is put in Appendix A. **Res:** We will put the related work section and the overall algorithm in the main text by reorganizing the presentation. --- # Questions For Authors > Q1: Is $\ell_{\rm na}$ in Line 245 a typo? What's the difference between $\ell_{\rm na}$ and $\ell_{\rm dpo}$? **Res:** This is not a typo. If we denote $P = \sigma(\beta\log\frac{\pi_\theta(y_1 \mid x)}{\pi_{\rm ref}(y_1\mid x)} - \beta \log\frac{\pi_\theta(y_2\mid x)}{\pi_{\rm ref}(y_2\mid x)})$, then $\ell_{\rm dpo} = -\log P$ and $\ell_{\rm na} = 1-P$. --- [1] Direct Preference Optimization: Your Language Model is Secretly a Reward Model [2] https://huggingface.co/docs/trl/dpo_trainer [3] Impact of Preference Noise on the Alignment Performance of Generative Language Models --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the rebuttal. 
I agree with most of the rebuttal, but I think there's a misunderstanding of my questions about the method. I'm not asking what happens if the SFT model disagrees with the preference data. The question is more about the preference data ordering. Because the proposed method is iteratively enhancing the model's own judgement, if the model is trained with noisy data in the beginning, it would be more difficult to adjust. --- Reply to Comment 1.1.1: Comment: ## Dear **Reviewer tkDe** and **the other three reviewers** who may read this response, We would like to take this final chance to express our gratitude for your **insightful comments, valuable suggestions, and positive feedback**. **Your time and dedication have made a significant contribution to the improvement of our paper**. --- ## We respond to Reviewer tkDe's Rebuttal Comment as follows, and we sincerely hope that our response provides an appropriate answer to the question. > **M1 in the initial review:** Would the following cases undermine the effectiveness ... would the preference learning be very inefficient? > > **Rebuttal Comment:** I'm not asking what happens if the SFT model disagrees with the preference data. The question is more about the preference data ordering. Because the proposed method is iteratively enhancing the model's own judgement, if the model is trained with noisy data in the beginning, it would be more difficult to adjust. **Res:** Thanks for patiently pointing out our misunderstanding of your question. *TL;DR:* (i) The performance of ROPO will degrade if the model is trained with noisy data in the beginning, but the impact is smaller than that on DPO based on an analysis of loss functions. (ii) Furthermore, we provide a probabilistic analysis to show that **such a case is unlikely to occur in practice** after shuffling the dataset. 1. **The impact on ROPO is smaller than that on DPO.** Suppose that all the early samples are noisy. 
According to our assumption that the SFT model possesses basic preference judgment ability (see our rebuttal), the model is likely to assign a large implicit reward margin $\Delta(y_2, y_1, x)=\hat{r}(y_2,x) - \hat{r}(y_1,x)$ to noisy samples $(x, y_1, y_2, y_1 \succ y_2)$. Based on our analysis in Section 3.2 (starting from Line 200 in the right column), DPO aggressively increases the gradient weights, leading to stronger learning from noisy samples; whereas ROPO learns noisy samples with more conservative gradient weights, rather than blindly trusting the preference labels of the noisy samples. Therefore, DPO is more influenced by the large number of early noisy samples, while ROPO is less affected. 2. **Such a case is unlikely to occur in practice.** Suppose we have a dataset containing $N$ samples, where a fraction $\alpha$ are noisy and the remaining $1-\alpha$ are clean. After shuffling the dataset, we are interested in **the probability that the first $\beta$ fraction of the samples contains at least $k\beta N$ noisy samples**. Note that: - The case we are concerned with is when $\beta < \min(\alpha, 1/2)$ and $k \in (1/2, 1)$, as it represents the scenario where noisy samples **dominate** in the **early stages** of training. - Without loss of generality, we assume $\alpha N$, $\beta N$, and $k\beta N$ are **integers** for ease of computation. We model it using the **hypergeometric distribution**, which describes the probability of drawing a specific number of "successes" (noisy samples) in a subset of the dataset without replacement. Let $X$ denote the number of noisy samples among the first $\beta N$ samples. Since the dataset is randomly shuffled, $X$ follows a hypergeometric distribution with population size $N$, number of noisy samples $\alpha N$, and sample size $\beta N$. Thus, the interested probability is: $$ P(X \ge k \beta N) = \sum_{x=k\beta N}^{\beta N} \frac{\binom{\alpha N}{x} \binom{(1-\alpha)N}{\beta N - x}}{\binom{N}{\beta N}}. 
$$ For large $N$, we can approximate the hypergeometric distribution with a normal distribution. The mean and variance are: $$ \mu = \beta N \cdot \frac{\alpha N}{N} = \alpha \beta N,\quad \sigma^2 = \beta N \cdot \alpha(1-\alpha) \cdot \frac{N - \beta N}{N-1} \approx \alpha(1-\alpha)\beta(1-\beta) N. $$ Therefore, the probability is approximately: $$ P(X \ge k \beta N) \approx 1 - \Phi\left( \frac{k \beta N - \mu}{\sigma} \right) = 1 - \Phi \left( \frac{(k-\alpha)\sqrt{\beta N}}{\sqrt{\alpha(1-\alpha)(1-\beta)}} \right), $$ where $\Phi$ is the CDF of the standard normal distribution. The probability $P(X \ge k \beta N)$ is **very small** for the following reason. Because $k > \alpha$, the threshold $k\beta N$ exceeds the expected number of noisy samples in the subset ($\mu=\alpha\beta N$). This corresponds to a **rare right-tail event** in the distribution, where the probability diminishes sharply as the threshold moves further from the mean. The normal approximation quantifies this rarity via the rapidly decaying tail of the Gaussian distribution. *A numerical verification:* The larger $\alpha, \beta$ are and the smaller $k$ is, the larger the probability $P(X \ge k\beta N)$ will be. However, when $\alpha=0.49, \beta=0.48, k=0.51$, we have $P(X \ge 0.51\cdot 0.48 \cdot N) < 0.004$ for $N \ge 5000$ and $P(X \ge 0.51\cdot 0.48 \cdot N) < 0.00007$ for $N \ge 10000$. Therefore, we can see that the probability is really very small in practice.
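The numerical verification above can be reproduced with a short standard-library script implementing the normal approximation $P(X \ge k\beta N) \approx 1 - \Phi\left(\frac{(k-\alpha)\sqrt{\beta N}}{\sqrt{\alpha(1-\alpha)(1-\beta)}}\right)$ (the function names below are ours, for illustration):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def tail_prob(alpha, beta, k, N):
    """Normal approximation to P(X >= k*beta*N),
    where X ~ Hypergeometric(N, alpha*N, beta*N)."""
    z = (k - alpha) * sqrt(beta * N) / sqrt(alpha * (1.0 - alpha) * (1.0 - beta))
    return 1.0 - phi(z)

print(tail_prob(0.49, 0.48, 0.51, 5000))   # < 0.004
print(tail_prob(0.49, 0.48, 0.51, 10000))  # < 0.00007
```

The two printed values match the bounds quoted in the numerical verification, and the tail shrinks rapidly as $N$ grows.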
Summary: This paper tackles the important problem of learning from noisy offline preference data. Motivated by the observation that previous noise-aware preference optimization methods either only partially mitigate the noise problem or require costly invocation of a separate LLM during the training process, the authors propose an iterative noise-aware preference alignment method, RObust Preference Optimization (ROPO). ROPO combines a robust loss, a noisy sample filtering process, and rejection sampling. On common preference tuning benchmark datasets UFB, Alpaca, and TL;DR, the authors demonstrate that ROPO consistently outperforms previous methods, establishing a practical method for handling preference noise. Claims And Evidence: The authors claim that noise in preference data is prevalent and properly handling the noisy samples is critical for preference alignment. The experiments designed in this study adequately support this claim and the proposed methods that mitigate the noisy preference data issue led to a clear performance improvement. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are sound and align with the evaluation processes widely adopted by the alignment research community. Theoretical Claims: I did not check the proof of the theoretical claims. Experimental Designs Or Analyses: I carefully reviewed the experiment design and ablation study. The experiment designs are sound. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: This paper tackles the important problem of handling noisy preference samples. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is well-written and structured. The theoretical analyses are clearly presented with appropriate mathematical formulations. The experimental methodology is thoroughly explained, though some sections could be more concise. 
The technical quality is high, with rigorous proofs and comprehensive experiments. The ablation studies effectively isolate the contributions of different components. The approach is novel and represents a significant advancement over existing methods. While it builds on DPO, the integration of noise-tolerance, filtering, and rejection sampling is innovative and well-executed. Other Comments Or Suggestions: I don't have additional comments/suggestions beyond those provided in other review sections. Questions For Authors: 1. How would ROPO perform in scenarios with non-uniform or clustered noise patterns (e.g., where certain types of queries are more prone to noisy preferences)? 2. Could the robustness-guided rejection sampling be extended to incorporate more diverse negative examples beyond the model's own generations? Would it be beneficial to include negative samples from additional sources? 3. How does ROPO handle cases where legitimate preferences might appear contradictory due to subjective differences rather than noise? It might be good to include a discussion on relationship with pluralistic alignment. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer mD6s, Thank you for your valuable review. We respond to each comment as follows and sincerely hope that our response can properly address your concerns. Figures and Tables can be found in **mD6s.md** in **https://anonymous.4open.science/r/ICML25-ROPO-F6CD** --- > Q1: How would ROPO perform in scenarios with non-uniform or clustered noise patterns? **Res:** In addition to the artificial uniform noise, our experiments also contain **two practical settings** that include **non-uniform or clustered noise patterns**. ROPO still outperforms all baselines under these two following settings. 1. (*Appendix E.3.1*) Practical noise coming from human annotators' trust in larger models over smaller ones. It is common practice to treat the response from a larger model as the preferred one and the response from a smaller model as the dis-preferred one. This leads to **non-uniform and clustered noise patterns**, as preference noise tends to appear in problems that large models do not handle well but small models handle effectively, rather than being uniformly distributed. 2. (*Appendix E.3.2*) Practical noise coming from LLM comparisons. We use Llama3-70B-Instruct to relabel the preferences in UFB, where the original preference label in the UFB dataset comes from GPT-4 rating. Then, we observe that about 30% of the labels are different from the original ones. The noise here is **non-uniform and clustered** rather than uniform, as differing preference labels only arise in cases where Llama-3-70B-Instruct and GPT-4 fail to reach a consensus. > Q2: Could the robustness-guided rejection sampling be extended to incorporate more diverse negative examples beyond the model's own generations (e.g., samples from additional sources)? **Res:** We have added the experiment of training Mistral-7B on UFB with 0% and 20% artificial noise, where the rejection sampling phase uses outputs from Llama-2-7B to obtain negative examples. 
However, as shown in **Table mD6s-1** in the anonymous link, we observe a decrease in performance. We speculate on the reasons as follows. 1. Recent studies [1,2] suggest that on-policy training, where responses are sampled from the model’s distribution, generally outperforms off-policy training, where responses are sampled from other distributions. Therefore, the on-policy training paradigm of standard ROPO is naturally superior to the off-policy training paradigm that uses outputs from other models (e.g., Llama-2-7B) as dispreferred responses. 2. Another line of research [3,4] indicates that the relationship between "the value of preference samples (x,y1,y2)" and "the reward margin between y1 and y2" remains inconclusive. It is unclear which reward margins are effective for preference alignment. We speculate that this is also one of the reasons why using generations from other models as dispreferred responses is often ineffective, because we lack a clear understanding of the distribution and impact of reward margins in this scenario. > Q3: How does ROPO handle cases where legitimate preferences might appear contradictory due to subjective differences rather than noise? It might be good to include a discussion of the relationship with pluralistic alignment. **Res:** We will include the following discussion on pluralistic alignment in Appendix C. Human preferences in the real world are often multi-dimensional and vary significantly due to differences in cultural background, education level, age, and region. This diversity in preferences has prompted the study of pluralistic alignment [5,6]. However, defining "noise" in pluralistic alignment is challenging. When preference dimensions exceed one, there is no "gold" latent reward model to rely on, and preference modeling based on the Bradley-Terry model becomes infeasible. Therefore, no ground-truth preference label exists between two responses, so we cannot define "preference noise".
To address this challenge, two promising directions can be considered: (1) Inject multiple preference dimensions into the prompt, enabling alignment conditioned on specific preference dimensions [7]. (2) Introduce an additional explanatory text in samples (x, y1>y2) to describe in what sense y1 is superior to y2. Although pluralistic alignment is beyond the scope of the paper, we look forward to engaging in interesting discussions with readers on this topic. --- [1] Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study [2] RLHF Workflow: From Reward Modeling to Online RLHF [3] Larger or Smaller Reward Margins to Select Preferences for Alignment? [4] Not All Preference Pairs Are Created Equal: A Recipe for Annotation-Efficient Iterative Preference Learning [5] Group Robust Preference Optimization in Reward-Free RLHF [6] Aligning to Thousands of Preferences via System Message Generalization [7] Rewards-in-Context: Multi-Objective Alignment of Foundation Models with Dynamic Preference Adjustment
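As background for the noise-sensitivity discussion in this thread, the standard DPO objective that ROPO builds on can be sketched as follows. This is the published DPO formulation (Rafailov et al., 2023), not code from the paper, and the log-probability values in the usage lines are purely illustrative:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for a single preference pair.

    logp_w / logp_l: summed log-probabilities of the preferred and
    dispreferred responses under the policy being trained;
    ref_logp_*: the same quantities under the frozen reference model.
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)

# A flipped (noisy) preference label swaps the two responses, turning a
# small loss into a large one -- the sensitivity the reviews refer to.
clean = dpo_loss(-10.0, -20.0, -12.0, -18.0)    # ~0.513
flipped = dpo_loss(-20.0, -10.0, -18.0, -12.0)  # ~0.913
```

Because the gradient magnitude grows as the margin becomes more negative, a mislabeled pair pushes the policy hard in the wrong direction, which is why noise tolerance matters here.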
Summary: LLM alignment has shown great potential for several applications. However, popular techniques such as DPO are highly sensitive to positive vs. negative samples, and therefore any noise in the training preference data can significantly impact the performance. To alleviate this issue, the paper proposes an optimization framework for selecting noisy samples and then develops an augmented DPO loss function that is noise tolerant and can distinguish the noisy samples. Experimental results on several academic benchmark datasets demonstrate that the proposed ROPO technique can almost always outperform base DPO and its variants. Claims And Evidence: The claims about noisy data filtering and a robust framework for LLM preference optimization have been validated both theoretically and experimentally. Methods And Evaluation Criteria: Experiments are conducted on 3 well-known academic benchmark datasets and the results were compared against SOTA baseline DPO methods. Evaluation metrics and overall experimental settings make sense to me. Theoretical Claims: The claim that ROPO is noise-tolerant as opposed to DPO has been proved theoretically using Theorems 3.1-3.5. Experimental Designs Or Analyses: Experiments are carefully designed. The performance is tested on the Alpaca, UFB, and TLDR datasets and compared against DPO, IPO, cDPO, and rDPO. Table 1 demonstrates that ROPO always outperforms other baselines (albeit by a small margin). More interestingly, the performance of ROPO increases monotonically over the iterations. Ablation studies show the importance of different steps in the ROPO framework. Supplementary Material: I have read the supplementary material at a high level and might have missed some of the mathematical proofs in Appendix F. Relation To Broader Scientific Literature: Preference selection and domain alignment of LLMs is an important problem and will have broader interest in the community.
This work improves the performance of traditional DPO methods and will be of interest to both academic and industry audiences. Essential References Not Discussed: NA Other Strengths And Weaknesses: Overall, ROPO is an interesting framework for improving performance over DPO. The concept of noisy data filtering and rejection sampling in DPO is novel. Experimental and theoretical results are also solid. Having said that, there are some concerns: 1. Experiments are only conducted on two 7B models, and therefore it is not clear how ROPO performance generalizes to larger models. 2. ROPO is an iterative method; therefore, its computational complexity and cost should be much higher than DPO's. It would be good to add the cost comparison between ROPO, IPO, and DPO. 3. Experimental results demonstrate that the performance gain on 2 datasets (UFB and Alpaca) is only 1-2%, but ROPO seems less cost-efficient than DPO. Results on the TLDR dataset seem very promising, so any discussion or insights on where ROPO shines over DPO and vice versa would be a good addition. Other Comments Or Suggestions: The structure of the paper requires a lot of changes. Algorithm 1 and the related work section should go into the main text. Questions For Authors: 1. Do you have performance analysis on larger models (except for 7B)? 2. What is the computational complexity of ROPO? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer smYA, Thank you for your valuable review. We respond to each comment as follows and sincerely hope that our response can properly address your concerns. Figures and Tables can be found in **smYA.md** in **https://anonymous.4open.science/r/ICML25-ROPO-F6CD** --- # Other Strengths And Weaknesses > W1: It is not clear how ROPO performance generalizes to models larger than 7B. **Res:** In **Appendix E.1** of our initial submission, we show the performance of ROPO on Llama-2-13B and Llama-3-70B (trained on UFB and evaluated on AlpacaEval). We will make references to Appendix E.1 in the main text more obvious. For your convenience, we also provide the results in **Table smYA-1** in the anonymous link. As can be seen from the table, ROPO significantly outperforms the baselines at the scales of 13B and 70B. --- > W2: It would be good to add the cost comparison between ROPO, IPO, and DPO. **Res:** In **Appendix A** of our initial submission, we provide the analysis of the computational cost for ROPO and **non-iterative methods (e.g., DPO, IPO, rDPO, and cDPO)**. We will make references to Appendix A in the main text more obvious. For your convenience, we quote the important content as follows. ROPO introduces additional costs for the noisy sample filtering and robustness-guided rejection sampling stages compared with non-iterative methods. We estimate that **the cost of ROPO is approximately 1.6 times that of non-iterative methods.** The additional costs of ROPO mainly come from forward computations, which are acceptable compared to the training (backward) cost and almost negligible in the entire chain of real-world large-scale LLM training. For details, please refer to Appendix A. --- > W3: Any discussion or insights on where ROPO shines over DPO (like on TL;DR) and vice versa would be a good addition. **Res:** We speculate that the extent of ROPO's advantage over DPO depends on whether the task is subjective.
- For tasks that are relatively more objective (such as the TL;DR summarization task), the ground-truth preference ranking labels are usually more deterministic, as the criteria for evaluating the quality of a summary are typically objective and quantifiable, such as whether it contains complete information and whether the numbers and other details are accurate. In such tasks, flipping preference labels can easily **provide the model with incorrect information**. Therefore, ROPO has greater potential compared to DPO in such tasks. - For tasks that are relatively more subjective (such as dialogue generation), it is often difficult to definitively say that one response is better than another for most questions, as the conclusions of preference comparisons can be influenced by factors such as the evaluator's cultural background, education level, age, etc. In such tasks, flipping preference labels **does not necessarily introduce "incorrect" information** to the model. Therefore, the advantage of ROPO over DPO may not be as significant in these tasks. Additionally, we would like to share our observation that may help explain why ROPO still demonstrates a significant advantage on TL;DR with 0% artificial noise. According to the estimation in Table 5 of Appendix C, the original TL;DR dataset **inherently contains 21.3%-27.0% noise**. We also observe that due to the use of different annotators to label preferences for TL;DR, 5.8% of the posts exhibit "cyclic preferences" among multiple summaries. That is, for a given post $x$, the preference ranking among three summaries is $y_1 \succ y_2, y_2 \succ y_3, y_3 \succ y_1$, which is evidently a form of noise. In such cases, ROPO naturally outperforms DPO. --- # Other Comments Or Suggestions > C1: Algorithm 1 and Related Work should go into the main text. **Res:** We will put them in the main text by reorganizing the presentation. --- # Questions For Authors > Q1: Do you have performance analysis on larger models (except for 7B)?
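The "cyclic preferences" described above ($y_1 \succ y_2, y_2 \succ y_3, y_3 \succ y_1$) can be detected mechanically as a cycle in a directed preference graph. A minimal sketch, assuming the preferences for one post are given as (winner, loser) pairs; the function name and data layout are illustrative, not from the paper:

```python
def has_cyclic_preferences(pairs):
    """Detect cyclic preferences among responses for a single post.

    pairs: list of (winner, loser) labels, e.g. from different annotators.
    A cycle such as y1>y2, y2>y3, y3>y1 is an inconsistency (a form of
    label noise) that no single reward model can explain.
    """
    graph = {}
    for w, l in pairs:
        graph.setdefault(w, set()).add(l)
        graph.setdefault(l, set())
    # Iterative DFS with colors: 0 = unvisited, 1 = on stack, 2 = done.
    color = {n: 0 for n in graph}
    for start in graph:
        if color[start]:
            continue
        stack = [(start, iter(graph[start]))]
        color[start] = 1
        while stack:
            node, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                color[node] = 2
                stack.pop()
            elif color[nxt] == 1:
                return True          # back edge => preference cycle
            elif color[nxt] == 0:
                color[nxt] = 1
                stack.append((nxt, iter(graph[nxt])))
    return False
```

For example, `[("y1", "y2"), ("y2", "y3"), ("y3", "y1")]` is flagged as cyclic, while a consistent chain `[("y1", "y2"), ("y2", "y3")]` is not.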
**Res:** Please refer to our response to W1. --- > Q2: What is the computational complexity of ROPO? **Res:** Please refer to our response to W2.
Fast and Robust: Task Sampling with Posterior and Diversity Synergies for Adaptive Decision-Makers in Randomized Environments
Accept (poster)
Summary: The authors study the problem of robust reinforcement learning through the lens of meta-RL in which the trained (meta) agent receives a few task-specific samples (can be zero) using which it adapts to a new task. The objective is to maximize the expected performance conditioned on the sampled task being in a specific bottom quantile w.r.t. performance (CVaR). The proposed approach is based on the RATS framework and an existing approach called MPTS, which continuously learns a model (along with the meta-agent) that can be used to predict the expected reward (or loss) of the agent on a new task given historical information about the samples used to train the agent. This predictive model is then used to select new tasks for the next iteration of the meta-RL algorithm. The main contributions of this paper are (i) formulation of the task selection problem (which tasks to use in each iteration of the meta-RL algorithm) as a higher-level reinforcement learning problem in which the actions are subsets of tasks, (ii) connecting the task-selection algorithm of MPTS to UCB, and (iii) proposing a new task-selection procedure that includes diversity of selected tasks in its objective. Claims And Evidence: Yes. The theoretical claims seem sound (though I haven't checked proofs in Appendix carefully) and the experiments show improved performance over existing approaches. The authors do not claim anything that is not backed by evidence in the form of experiments and theory. Methods And Evaluation Criteria: Yes. The authors use standard evaluation methods and use standard RL environments in the experiments. Theoretical Claims: I did not check the proofs in the Appendix (only briefly glanced over them). Experimental Designs Or Analyses: The experiments appear sound and there is no major issue with the evaluations. Supplementary Material: Briefly looked at the appendix for some definitions and high-level proof ideas. Did not check the code.
Relation To Broader Scientific Literature: - Overall, framing the task-selection problem as an RL problem appears novel and could potentially provide a framework for future work on meta learning. The authors also showed how prior work like MPTS fits into their framework. - Insight on task diversity and how the selection criterion can lead to selecting tasks from a narrow range of values is interesting and applicable more broadly. - The proposed approach is agnostic to the specific meta-RL algorithm used and the authors show this using experiments based on different meta-RL algorithms in the literature. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: ### Additional Strengths - The experimental results appear convincing. The authors show clear improvements in both (i) the final performance and (ii) the sample-efficiency of learning. - The approach is shown to be lightweight and offers performance improvements with minimal impact to cost. - The paper is well-written and prior concepts related to MPTS are explained well. There does not appear to be any major weakness that would impact acceptance. Some minor comments are below. Other Comments Or Suggestions: - I think the term "secret MDP" is a bit confusing as it suggests that there is a secret environment in which we want the agent to perform well. Something like task-selection MDP or meta-MDP is maybe less confusing. - As someone with no prior background on MPTS, it was a little hard to figure out which parts are "new" in this paper. I had to look at the MPTS paper to clearly figure this out. It would be great if this can be clarified in the paper. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: **_We sincerely appreciate Reviewer beLW's efforts and recognition of our work. Below, we improve the manuscript based on beLW's feedback._** ___ **1. Terminology clarity about secret MDP** Thank you for your valuable feedback. We used the term "secret MDP" to highlight that we are the first to model robust active task sampling as an MDP and solve it with our developed i-MABs. We will further explain this point in the revised manuscript and consider using a more intuitive term, such as "task-selection MDP", to avoid potential confusion. **2. Comparison between PDTS and MPTS** Thank you for the suggestion. Currently, we primarily summarize the orthogonal contributions of MPTS and PDTS in Table 1. For example, MPTS develops a VAE-like risk predictive model to achieve nearly $\text{CVaR}_\alpha$ optimization but suffers from the concentration issue. Our PDTS (i) introduces *a versatile theoretical tool, i-MAB,* to achieve a more robust solution, i.e., nearly worst-case optimization for meta RL and DR cases, (ii) *resolves the concentration issue*, and (iii) offers *easier implementation with stochastic optimism*. These three contribution points are new and increase the scalability of RATS. Encouragingly, PDTS with i-MABs provides a stable and tractable scheme for worst-case optimization in adaptive decision-making, broadening its applicability. In the revised manuscript, we will include more RATS background and further highlight the significance of PDTS for more general readers. ___ **_Once again, thank you for your valuable review and thoughtful recognition of our work. Your feedback is greatly appreciated and has significantly improved our manuscript. We hope our responses sufficiently answer your questions._** --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response, which answered my questions. As mentioned in the other reviews, I would encourage the authors to also include a discussion on limitations in the paper.
--- Reply to Comment 1.1.1: Comment: Sure ^.^. **We will incorporate all suggestions into the revised version, including the mentioned empirical findings and limitations summary.** We hope the i-MABs in this work will facilitate the algorithm design of efficient robust adaptation and unleash the power of reinforcement learning in large-scale decision-making. Importantly, all suggestions and questions were constructive in improving our manuscript, and we thank the reviewers' and area chairs' efforts on this work.
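For readers less familiar with the $\text{CVaR}_\alpha$ objective central to this thread: the empirical estimate is simply the mean return over the worst $(1-\alpha)$ fraction of sampled tasks. A minimal sketch of this standard definition (not code from the paper):

```python
def cvar(returns, alpha):
    """Empirical CVaR_alpha: mean return over the worst (1 - alpha)
    fraction of episodes; alpha -> 1 approaches worst-case optimization."""
    k = max(1, round(len(returns) * (1.0 - alpha)))
    return sum(sorted(returns)[:k]) / k

# With 10 episode returns, CVaR_0.9 averages the single worst episode,
# while CVaR_0.5 averages the worst five.
```

This also clarifies the paper's framing elsewhere in these reviews: a candidate batch 64x larger than the training batch corresponds to optimizing roughly $\text{CVaR}_{1-1/64}$, i.e., nearly the worst case.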
Summary: The paper focuses on adaptation robustness, addressing scenarios where a risk-predictive model is utilized to mitigate intense evaluation requirements. It formulates the robust active task sampling (RATS) problem as a partially observable Markov decision process (POMDP), providing theoretical insights into the problem. Empirically, the paper demonstrates that the proposed method, Posterior-Diversity Synergized Task Sampling, achieves stronger performance in vision-based reinforcement learning tasks compared to baseline methods. Claims And Evidence: The claims are supported by empirical results. One question I have is regarding the introduction, where the authors state that their method requires less complex configurations compared to prior works. Could the authors clarify what is meant by ‘configurations’? Providing additional context and specific examples would help in understanding this claim Methods And Evaluation Criteria: The methods and evaluation protocol are mainly based on meta-RL literature and robust RL literature, which makes sense to me. Theoretical Claims: I did not evaluate the theoretical analysis as I am not familiar with this field. Experimental Designs Or Analyses: The experimental design appears sound to me. A general question I have is regarding the limitations of the proposed method. What are the potential failure cases if the assumptions made in the analysis do not hold? A discussion on these aspects would help in understanding the robustness and applicability of the approach. Supplementary Material: Yes, I briefly reviewed the code Relation To Broader Scientific Literature: Robust adaption is very important in real-world applications, especially robotics applications. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: Please see previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: _**We sincerely appreciate Reviewer v1xi's efforts and positive feedback. Below, we provide our responses.**_ ___ **1. Clarification on the simplification of configurations** Apologies for any confusion. The simplification of configurations lies in two aspects: - As stated in Lines 261–270 of Section 3.2, MPTS requires careful tuning of the candidate task batch size $\hat{\mathcal{B}}$. In contrast, PDTS eliminates this requirement and remains scalable even under extreme worst-case optimization (e.g., $\hat{\mathcal{B}} = 64 \times \mathcal{B}$) by introducing a diversity-regularized acquisition function to mitigate the concentration issue. - As stated in Lines 281–284 of Section 3.3, we use posterior sampling to utilize stochastic optimism while avoiding the calibration of exploration and exploitation weights in subset search, which is required by UCB-based methods. We will highlight these in the revised manuscript. **2. Discussion on assumption failure** Thank you for your thoughtful question. The assumptions in our analysis are consistent with those in prior works [1,2], as stated in Appendix A. If they do not hold, the effectiveness of the risk predictive model may degrade, yielding much lower Pearson correlation coefficients and potentially impacting robust optimization. Fortunately, our experiments demonstrate that PDTS achieves higher PCC values and strong robust optimization performance, empirically supporting the validity of these assumptions. We will incorporate this discussion into the revised manuscript for clarity. ___ **_Once again, thank you for your valuable review. Your feedback is greatly appreciated and has helped improve our manuscript. We hope our responses adequately address your concerns, and we would be grateful if you could consider raising the score._** ___ **References:**\ [1] Greenberg I, Mannor S, Chechik G, et al. Train hard, fight easy: Robust meta reinforcement learning.
Advances in Neural Information Processing Systems, 2023, 36: 68276-68299.\ [2] Wang, Q., Lv, Y., Xie, Z., & Huang, J. (2023). A simple yet effective strategy to robustify the meta learning paradigm. Advances in Neural Information Processing Systems, 36, 12897-12928.
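To make the two configuration simplifications discussed in this thread concrete, here is an illustrative sketch of a posterior-sampling acquisition with a diversity bonus. All names, the Euclidean diversity term, and the greedy subset search are assumptions for illustration only; they do not reproduce PDTS's actual acquisition function (Eq. (11)) or its risk predictive model:

```python
def select_batch(candidates, sample_risk, batch_size, gamma=1.0):
    """Greedy posterior-diversity task selection (illustrative sketch).

    candidates: task identifiers as tuples of floats.
    sample_risk: draws one posterior sample of predicted risk per task,
    standing in for a learned risk predictive model (stochastic optimism:
    no explicit exploration weight to calibrate, unlike UCB).
    """
    scored = {tau: sample_risk(tau) for tau in candidates}
    chosen = [max(scored, key=scored.get)]       # start from the riskiest draw
    while len(chosen) < batch_size:
        def gain(tau):
            # Distance to the closest already-chosen task: a diversity
            # bonus that counteracts concentration on near-duplicate tasks.
            dist = min(sum((a - b) ** 2 for a, b in zip(tau, c)) ** 0.5
                       for c in chosen)
            return scored[tau] + gamma * dist
        remaining = [t for t in candidates if t not in chosen]
        chosen.append(max(remaining, key=gain))
    return chosen
```

With `gamma = 0` this degenerates to picking the `batch_size` riskiest samples (the concentration-prone behavior); `gamma > 0` trades predicted risk against spread in the task space.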
Summary: This paper tackles robust active task sampling (RATS) in domain randomization or meta-RL for worst-case performance. Tasks are viewed as arms in an infinite multi-armed bandit, but the existing MPTS can over-concentrate on top-B tasks. The authors propose PDTS, which replaces UCB-based acquisition with posterior sampling and adds a diversity term. Experiments show PDTS achieves faster, more robust adaptation than baselines (ERM, DRM, GDRM, MPTS) on MuJoCo and domain-randomized robotics tasks. Claims And Evidence: Overall, the claims have coherent theoretical backing and empirically strong results across multiple benchmarks. Methods And Evaluation Criteria: The evaluation criteria make sense for risk-averse policy adaptation. The chosen suite of MuJoCo and robotic tasks is widely used in domain randomization and meta-RL research, so the evaluation is aligned with standard practice. Theoretical Claims: All theoretical results are plausible and consistent with standard concepts in bandit theory and set diversification. There is no obvious flaw in these short formal statements. Given the scope of the paper, the claims appear correct, and the sketches/logic are standard enough that no glaring issues stand out. Experimental Designs Or Analyses: The experimental design is thorough and appropriate for the proposed method, supporting the authors’ conclusions about robust adaptation performance and sample efficiency. Supplementary Material: I have briefly gone through the supplementary material, including additional theoretical details, proofs, and clarifications of the risk-predictive model. Relation To Broader Scientific Literature: The paper situates itself among: - Risk-averse RL frameworks (DRM/CVaR, GDRM, etc.). - Meta-RL approaches. Essential References Not Discussed: No major omissions jump out. The paper cites standard domain-randomization, risk-averse RL, and meta-learning literature. Other Strengths And Weaknesses: - Strengths: 1. 
The i-MAB perspective is novel and successfully integrates RATS with risk-averse RL under a cohesive theoretical argument. 2. The PDTS method is simple but effectively addresses subset concentration by blending posterior sampling and diversity. 3. The empirical evaluation is robust, spanning multiple benchmarks (both symbolic and visual), and consistently demonstrates performance improvements. - Weaknesses: 1. Relying on a large pseudo batch size $\hat{B}$ for nearly worst-case coverage can introduce computational overhead. 2. The new diversity regularization parameter $\gamma$ may require careful tuning. 3. The method heavily depends on a risk-predictive model. If that model’s performance degrades, PDTS coverage might fail to accurately capture the most challenging tasks. While correlation results are encouraging, model reliability remains a potential concern. Other Comments Or Suggestions: - Clarifying the best practices for choosing the diversity weight $\gamma$ or the approximate search method might help practitioners. - Additional ablations on how the quality of the risk-predictive model $p(\ell|\tau)$ influences PDTS’s final performance could further strengthen the discussion. Questions For Authors: 1. Scaling to high dimensions: How does PDTS handle extremely high-dimensional task spaces? Do you have any heuristic or projection strategy if the dimension is large? This would clarify how widely PDTS can be applied in large-scale real-world DR or meta-RL scenarios. 2. Diversity Regularization: In practice, how sensitive is the method to the diversity weight $\gamma$? Would you expect the best $\gamma$ to scale with $\hat{B}$ or with dimension $d$? If so, how? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: _**We sincerely appreciate Reviewer b9NR's efforts and constructive feedback. Below, we provide our responses.**_ ___ **1. Additional computational overhead** Thank you for this valuable comment. We quantitatively analyzed the extra computational overhead in Fig. 5. Even with a $64\times$ candidate batch (i.e., $\text{CVaR}_{1-1/64}$ approximating the worst case), **the additional computational overhead remains negligible due to the efficiency of the risk predictive model; its cost is significantly lower than that of agent-environment interactions and policy optimization in MetaRL or DR**. We will emphasize this point in the revised manuscript. **2. Ablation study on the risk predictive model** Thank you for the valuable question; it has been very helpful to our analysis. We conducted an ablation study to analyze the impact of the risk predictive model. We designed two variants, **PDTS-Deep and PDTS-Shallow**, by increasing and decreasing the number of encoder-decoder layers, respectively. Additionally, we replaced the encoder-decoder structure with an MLP to create **PDTS-MLP**. Results on Walker2dVel are summarized in the table below:

|Methods|$\text{CVaR}_{0.9}$|$\text{CVaR}_{0.7}$|$\text{CVaR}_{0.5}$|$\text{Average}$|
|-|-|-|-|-|
|ERM|-69.77$\pm$7.62|-31.73$\pm$7.82|-3.78$\pm$6.66|38.88$\pm$4.73|
|PDTS|**-22.42$\pm$3.13**|**2.86$\pm$3.04**|16.57$\pm$2.93|40.40$\pm$3.07|
|PDTS-Deep|-30.51$\pm$7.11|0.55$\pm$5.69|**17.99$\pm$4.8**|**44.42$\pm$3.69**|
|PDTS-Shallow|-41.24$\pm$4.74|-11.34$\pm$4.5|3.92$\pm$4.33|33.64$\pm$4.01|
|PDTS-MLP|-41.07$\pm$4.88|-12.04$\pm$5.06|3.38$\pm$4.94|32.96$\pm$4.58|

From these results, we observe: - All PDTS variants achieve better robust task adaptation, confirming the effectiveness of PDTS and its generality across different risk predictive models. - The encoder-decoder architecture generally outperforms MLP-based models, supporting the rationale behind this design.
- Deeper networks may introduce a performance-robustness trade-off in the current setting, which we plan to further investigate in more complex scenarios. - Weaker risk prediction models degrade overall performance, due to poorer identification of difficult MDPs. **3. Scaling to high dimensions** Thank you! This is a very insightful and important question. Since RATS is still in its early stages, we are actively exploring its scalability to high-dimensional task spaces. We agree that heuristic or projection strategies could be viable solutions. One potential approach is to leverage lightweight general-purpose embedding models, such as WordLlama [1], to compress high-dimensional task identifiers from language or vision modalities [2]. We appreciate your insight and will continue to investigate this direction further to broaden the application scope of PDTS. **4. Sensitivity of hyperparameter $\gamma$ and tuning practices** We conducted an ablation study on $\gamma$ in Figure 13 and found that **PDTS is relatively stable with respect to $\gamma$ within a certain range**. Below, we summarize two **key tuning recommendations**: - In most cases, setting $\gamma$ to 1 or a nearby value secures sufficiently strong performance. - For scenarios with an extremely low-dimensional task identifier, increasing $\gamma$ appropriately may improve performance. We will incorporate these recommendations into the revised manuscript and specify the $\gamma$ values used in practice in the released code to aid practitioners. **5. Would you expect the best $\gamma$ to scale with $\hat{\mathcal{B}}$ or with dimension $d$?** We suggest using a larger candidate batch, $\hat{\mathcal{B}}$, to better capture the worst-case scenario and improve performance, as shown in Figure 13. Therefore, we set $\hat{\mathcal{B}} = 64 \times \mathcal{B}$ in all main experiments without delicate tuning and believe it is unnecessary to co-tune it with $\gamma$.
We hypothesize that **$\gamma$ scales positively with $\hat{\mathcal{B}}$**, as larger batches exacerbate concentration issues. Additionally, we expect that, in comparable scenarios, **a smaller $d$ leads to a larger $\gamma$**, as concentration issues become more likely. For example, we use $\gamma = 1$ for the task with a 2D task identifier (Walker2dMassVel) and $\gamma = 5$ for the 1D task (Walker2dVel). ___ _**Once again, thank you for your valuable review and thoughtful recognition of our work. Your feedback is greatly appreciated and has significantly improved our manuscript. We hope our responses sufficiently address your concerns.**_ ___ **References:**\ [1] Miller, D. L. (2024). WordLlama: Recycled token embeddings from large language models. https://github.com/dleemiller/wordllama \ [2] Kim, M. J., Pertsch, K., Karamcheti, S., et al. (2024). OpenVLA: An open-source vision-language-action model. arXiv preprint arXiv:2406.09246. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response, which satisfactorily addressed my questions. I am maintaining my initial score for acceptance. --- Reply to Comment 1.1.1: Comment: We thank Reviewer b9NR once again for the constructive suggestions and kind replies. We'll incorporate the mentioned discussions and statistical results in the updated manuscript.
Summary: This paper studies a robust active task sampling (RATS) paradigm, models it as an infinitely many-armed bandit (i-MAB) problem, and proposes a novel method called Posterior and Diversity Synergized Task Sampling (PDTS). PDTS mitigates the task concentration issues in an existing approach, Model Predictive Task Sampling (MPTS), by incorporating a diversity regularized acquisition function, replacing the upper confidence bound acquisition function in MPTS. As a result, PDTS enables exploration in a broader range of tasks and improves robustness for nearly worst cases. The authors conduct extensive experiments in various meta RL settings, and show that PDTS improves CVaR robustness, sample-efficiency for average return, and zero-shot adaptation in out-of-distribution tasks. Claims And Evidence: Claims in terms of mitigating the task concentration issue in MPTS, computational efficiency, versatility of the framework, improved robustness, and zero-shot adaptation are well supported through theoretical and empirical evidence. Although theoretically justified, empirical evidence supporting proposition 3.4 is missing. An interesting ablation would be observing the trade-off between improved nearly worst-case robustness and computational efficiency of PDTS. Methods And Evaluation Criteria: They make sense when evaluating CVaR robustness, sample-efficiency, and zero-shot adaptation in meta RL. Theoretical Claims: Yes, I checked the correctness of proofs of propositions 3.2, 3.3, and 3.4. I haven't seen any issues. Experimental Designs Or Analyses: I checked the experimental designs behind meta RL, domain randomization, and visual domain randomization settings. I haven't seen any issues. Supplementary Material: I briefly checked the supplementary material, which consists of an online anonymized repository.
Relation To Broader Scientific Literature: The i-MAB model provides a theoretical framework for active task sampling, which is useful for risk-averseness in many decision-making problems, such as reinforcement learning and robotics, where robust adaptation is key for real-world applicability. Essential References Not Discussed: The paper discusses various task sampling methods for providing risk-averseness and robustness, yet it does not mention curriculum learning at all. For example, in the paper that presents RoML (Greenberg et al.), which the authors evaluate in their experiments, curriculum learning methods are also studied and discussed. I believe a discussion of curriculum learning methods would broaden the audience of this work. Here are a few of those instances: Dennis, M., Jaques, N., Vinitsky, E., Bayen, A., Russell, S., Critch, A., & Levine, S. (2020). Emergent complexity and zero-shot transfer via unsupervised environment design. Advances in Neural Information Processing Systems, 33, 13049-13061. Jiang, M., Dennis, M., Parker-Holder, J., Foerster, J., Grefenstette, E., & Rocktäschel, T. (2021). Replay-guided adversarial environment design. Advances in Neural Information Processing Systems, 34, 1884-1897. Koprulu, C., Simão, T. D., Jansen, N., & Topcu, U. (2023, July). Risk-aware curriculum generation for heavy-tailed task distributions. In Uncertainty in Artificial Intelligence (pp. 1132-1142). PMLR. Other Strengths And Weaknesses: Strengths: - The paper clearly structures and explains the motivation, the problem, the proposed method and the experimental results. - Introduction of i-MABs as a model for RATS provides a versatile theoretical framework. - PDTS mitigates key issues in an existing method, MPTS, and provides computational efficiency.
- Theoretical results are clearly explained, and the empirical evaluation presents strong evidence in favor of the proposed method in improving the risk-averseness, robustness, and generalization capabilities of meta RL agents. Weaknesses: - The flow of Sections 2 and 3 degrades as more notation is introduced. - There is no ablation study to justify nearly worst-case optimization, as proposition 3.4 suggests. Although the authors prove the proposition, an interesting study would be on the trade-off between the worst-case performance of PDTS and the computational costs of increasing the cardinality of the set of candidate tasks. - There is no discussion of the limitations of the introduced model and the proposed method. Other Comments Or Suggestions: I didn't see any typos. However, I highly recommend that the authors refine their use of symbols. Section 2 and, most importantly, Section 3 are very hard to read. Questions For Authors: Does the meta RL agent have access to task identifiers? I assume it does not. But then I'm confused about how they can be utilized in Eq. (11) to measure the diversity of candidate tasks. As the task identifiers are assumed to be partially observable in meta RL, I think a clarification is needed here. Code Of Conduct: Affirmed. Overall Recommendation: 4
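As an aside on the diversity question in Eq. (11): the summary describes PDTS as replacing a UCB acquisition with a diversity-regularized one over task identifiers. A generic greedy sketch of that idea follows; `select_batch`, the Euclidean diversity term, and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import math

def select_batch(candidates, predicted_risk, batch_size, lam=1.0):
    """Greedy sketch of a diversity-regularized acquisition over task
    identifiers: score = predicted risk + lam * distance to the already
    chosen set. Illustrative only; not the paper's exact PDTS acquisition."""
    chosen, remaining = [], list(range(len(candidates)))
    while len(chosen) < batch_size and remaining:
        def score(i):
            if not chosen:
                return predicted_risk[i]
            div = min(math.dist(candidates[i], candidates[j]) for j in chosen)
            return predicted_risk[i] + lam * div
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Three near-duplicate high-risk tasks and one distant moderate-risk task:
cands = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
risks = [1.0, 0.95, 0.9, 0.5]
# Pure risk ranking (lam=0) concentrates on the near-duplicates;
# the diversity term pulls in the distant task instead.
print(select_batch(cands, risks, batch_size=2, lam=1.0))  # -> [0, 3]
print(select_batch(cands, risks, batch_size=2, lam=0.0))  # -> [0, 1]
```

The contrast between the two calls mirrors the task concentration issue the review describes: without the diversity term, the selected batch collapses onto near-identical high-risk tasks.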
Rebuttal 1: Rebuttal: _**We sincerely thank Reviewer xw4Y's efforts and thoughtful feedback. Below, we provide our responses.**_ ___ **1. Discussion of curriculum learning methods** Thank you for your valuable suggestion. We'll cite these important works and discuss them in the revised manuscript, adding the content below at Line 667: > Curriculum learning is also a crucial topic related to adaptive decision-making. Dennis, Michael, et al. [1] develop unsupervised environment design (UED) as a novel paradigm for environment distribution generation and achieve SOTA zero-shot transfer. Jiang, Minqi, et al. [2] introduce prioritized level replay to enhance UED and formulate dual curriculum design for improving OOD and zero-shot performance. In [3], heavy-tailed distributions are incorporated into the automated curriculum, which leads to robustness improvement. In contrast, our work emphasizes robust task adaptation. Integrating the idea of surrogate evaluation from PDTS into curriculum design could be an interesting direction for future research. **2. Writing optimization** Thank you for your constructive suggestion. We will further simplify the math notations in Sections 2 and 3 to improve comprehension. For instance, instead of introducing the new notation $\mathbf{R}$, we will use the more intuitive summation form of the cumulated return to enhance clarity. **3. Trade-off between worst-case performance and computational costs** Thanks for the insightful comment. This is a crucial point that we also investigated in Lines 432–438, as well as Figures 5 and 13. Figure 5 demonstrates that even with a $64\times$ candidate batch (i.e., $\text{CVaR}_{1-1/64}$ approximating the worst case), the **additional computational overhead remains negligible due to the efficiency of the risk predictive model—its cost is significantly lower than that of agent-environment interactions and policy optimization in MetaRL or DR**. 
Figure 13 shows that increasing the candidate batch size improves nearly worst-case performance. Therefore, we adopt the $64\times$ pseudo batch consistently in all main experiments without extra adjustments. We will emphasize this point in the revised manuscript. **4. Limitations discussion** Thank you for your insightful feedback. We briefly discussed the limitations of the proposed method in the Conclusion (Lines 435–438) and Appendix E (Lines 1347–1348). Specifically, our approach (1) relies on the risk predictive model for roughly scoring task difficulties, and (2) depends on identifier information and the inherent smoothness of the adaptation risk function, which might not hold in restricted scenarios. We will provide a more detailed discussion in the revised manuscript. **5. Meta-RL agent's access to task identifiers** Apologies for the confusion. We will clarify this in the revised manuscript: The task identifiers are indeed visible in Meta-RL, consistent with prior works such as ROML [4] and MPTS [5]. Detailed information on the task identifiers can be found in Table 3. ___ _**Once again, thank you for your valuable review. Your feedback is greatly appreciated and has greatly helped improve our manuscript. We hope our responses adequately address your concerns, and we would be grateful if you would reconsider the evaluation and update the score accordingly.**_ ___ **References:**\ [1] Dennis, M., Jaques, N., Vinitsky, E., Bayen, A., Russell, S., Critch, A., & Levine, S. (2020). Emergent complexity and zero-shot transfer via unsupervised environment design. Advances in Neural Information Processing Systems, 33, 13049-13061.\ [2] Jiang, M., Dennis, M., Parker-Holder, J., Foerster, J., Grefenstette, E., & Rocktäschel, T. (2021). Replay-guided adversarial environment design. Advances in Neural Information Processing Systems, 34, 1884-1897.\ [3] Koprulu, C., Simão, T. D., Jansen, N., & Topcu, U. (2023, July). 
Risk-aware curriculum generation for heavy-tailed task distributions. In Uncertainty in Artificial Intelligence (pp. 1132-1142). PMLR.\ [4] Greenberg, I., Mannor, S., Chechik, G., et al. (2023). Train hard, fight easy: Robust meta reinforcement learning. Advances in Neural Information Processing Systems, 36, 68276-68299.\ [5] Wang, Q. C., Xiao, Z., Mao, Y., et al. (2025). Beyond any-shot adaptation: Predicting optimization outcome for robustness gains without extra pay. arXiv preprint arXiv:2501.11039. --- Rebuttal Comment 1.1: Comment: Thank you for your response. My concerns and questions have been adequately addressed. I will change my score to Accept. I hope to see the changes promised by the authors in the final version, as I believe they will greatly improve the readers' experience. --- Reply to Comment 1.1.1: Comment: We thank Reviewer xw4Y for insightful comments. We'll polish the manuscript and incorporate these suggestions into the updated version as mentioned.
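For readers less familiar with the nearly-worst-case objective discussed in point 3 of the rebuttal, an empirical $\text{CVaR}_\alpha$ over a candidate batch can be sketched as follows. This is a minimal illustration assuming lower return = worse, not the authors' implementation:

```python
import math

def cvar(returns, alpha):
    """Empirical CVaR_alpha: the mean of the worst (1 - alpha) fraction of
    outcomes (lower = worse). Minimal illustration, not the authors' code."""
    k = max(1, math.ceil((1 - alpha) * len(returns)))
    worst = sorted(returns)[:k]
    return sum(worst) / k

returns = list(range(64))             # a pseudo batch of 64 candidate outcomes
print(cvar(returns, alpha=1 - 1/64))  # -> 0.0: exactly the worst outcome
print(cvar(returns, alpha=0.5))       # -> 15.5: mean of the worst half
```

With a batch of 64, $\alpha = 1 - 1/64$ keeps exactly one outcome, which is why the $64\times$ candidate batch in the rebuttal approximates the worst case.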
Learning Optimal Multimodal Information Bottleneck Representations
Accept (poster)
Summary: The author introduces a theoretically guaranteed multimodal information bottleneck approach. This method dynamically adjusts the regularization weights of each modality by considering the varying degrees of task-relevant information across different modalities. Theoretically, the optimization objective proposed by the author is of a remarkably straightforward form, and the practical loss function serves as an upper bound to this theoretical objective, thereby ensuring feasibility. #### update after rebuttal: I don't change my assessment. Claims And Evidence: Most of the methods and propositions in the article are supported by theoretical foundations. However, Equation (3) raises some questions, which will be elucidated in the ‘Questions For Authors’ section. Methods And Evaluation Criteria: The information bottleneck framework proposed by the author addresses the overfitting predicament inherent in multimodal learning, while the adaptive regulation of regularization weights effectively addresses the challenges posed by imbalanced learning scenarios. Theoretical Claims: I carefully checked the proofs related to section 5.1 (Appendix B) and skimmed through the proofs associated with section 5.2 (Appendix C). The proof section is quite rigorous, and there are no obvious issues. Experimental Designs Or Analyses: The simulated two-modality dataset is quite intriguing. However, it does not account for scenarios that require simultaneous decision-making across both modalities, such as sarcasm. For instance, consider $a, b \sim N(0, I_D)$, with the label $y = \Delta(a^T b > 0)$. In this case, $I(a; y) = I(b; y) = 0$, while $I(y; a, b) > 0$. For the audio-visual dataset, it would be beneficial to include datasets other than CREMA-D, such as KS or VGGSound. Regarding CREMA-D, simply increasing the number of training epochs can enable a straightforward fusion method like concatenation to achieve a score close to 70 with ResNet18. 
Therefore, to rule out the possibility that the method merely converges quickly, it is advisable to either increase the number of learning epochs or introduce additional datasets. Supplementary Material: As I mentioned earlier, I reviewed the sections in the appendix pertaining to the proofs and experiments (B, C, F, G, H). Relation To Broader Scientific Literature: The ingenious construction of the loss in the article and the proofs related to information theory in this section may provide valuable insights for future research in multimodal learning. A well-designed information bottleneck could also potentially benefit downstream tasks. Essential References Not Discussed: The article's citations are quite comprehensive. Other Strengths And Weaknesses: The article is logically structured, with clearly defined and reader-friendly symbols. The theoretical section is particularly detailed and rigorous. Other Comments Or Suggestions: Since the derivations are all included in the appendix and only the conclusions are presented in the main text, Properties 1 can be removed from the main text. Questions For Authors: In the experiments, the non-MIB-based methods only include some basic approaches. I am curious about how they compare with newer methods: Peng, Xiaokang, et al. "Balanced multimodal learning via on-the-fly gradient modulation." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022. Zhang, Xiaohui, et al. "Multimodal representation learning by alternating unimodal adaptation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. I harbor a degree of skepticism towards the reasoning in Equation (3), which posits that concatenating $e_i$ enhances the model's learning by improving the signal-to-noise ratio. Typically, the introduction of signal-to-noise ratio considerations involves additive noise, such as $z_i^{noise} = z_i + e_i$, rather than direct concatenation. 
Moreover, the ablation studies do not include relevant content to substantiate this. Ethical Review Concerns: No Code Of Conduct: Affirmed. Overall Recommendation: 4
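The synergy construction raised under Experimental Designs ($a, b \sim N(0, I_D)$, $y = \Delta(a^T b > 0)$) is easy to probe numerically: by the sign-symmetry of $b$, no predictor built from $a$ alone beats chance, while the pair determines the label exactly. A quick sketch, where the single-modality readout `sum(a) > 0` is just one illustrative choice:

```python
import random

random.seed(0)
D, N = 8, 2000
acc_single = acc_joint = 0
for _ in range(N):
    a = [random.gauss(0, 1) for _ in range(D)]
    b = [random.gauss(0, 1) for _ in range(D)]
    dot = sum(ai * bi for ai, bi in zip(a, b))
    y = dot > 0
    acc_single += ((sum(a) > 0) == y)  # readout of modality a alone: ~chance
    acc_joint += ((dot > 0) == y)      # joint readout: deterministic, always right
print(acc_single / N, acc_joint / N)   # first value hovers near 0.5, second is 1.0
```

This is exactly the regime where unimodal training signals vanish and only a fusion objective can drive the encoders, which is the point of the question.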
Rebuttal 1: Rebuttal: **Experimental Designs Or Analyses:** **Q1.** Thank you for this insightful observation regarding synergistic interactions between modalities. In response, we conducted additional experiments using synthetic data with two modalities ($x_1,x_2$), where $x_1=[a_0;b_0]$, $x_2=[a_1;b_1]$, and $y=\Delta(a_0^Ta_1>0)$. All $a_i$ and $b_i$ are sampled from the standard Gaussian, yielding 10,000 sample pairs. Theoretically, in this setting, the TRBs would be unable to distinguish between $a_i$ and $b_i$, leading the encoders to randomly incorporate information. However, $L_{OMF}$ of the OMF block can capture the inter-modal synergistic interactions and guide the encoders to extract task-relevant information. To validate this, we tested our method in two configurations: $L_{OMF}$ is either involved (*w-$L_{OMF}$*) or not (*wo-$L_{OMF}$*) in optimizing the modality encoders. For comparison, we also evaluated a single-modality baseline (*Single*) and the union of the two modalities (*Union*). The table below shows that *w-$L_{OMF}$* achieves the highest accuracy, followed by *Union* and *wo-$L_{OMF}$*, while *Single* performs near random guessing. These results align with our expectations and demonstrate the OMF block's effectiveness in capturing inter-modal synergistic interactions.

|Setting|ACC|
|-|-|
|*Single*|0.487/0.489|
|*Union*|0.626|
|*wo-$L_{OMF}$*|0.725|
|*w-$L_{OMF}$*|0.812|

**Q2.** As you suggested, we increased the training epochs from 100 to 1,600 and observed that our method slightly improved, with macro-accuracy increasing from 63.6% to 65.4%. We agree that introducing the KS or VGGSound datasets can help to more accurately evaluate our method. However, due to the limited rebuttal period and the very large data sizes involved, we plan to include these results in the revised manuscript. **Questions For Authors:** **Q1.** Thank you for pointing us to these recent methods. 
Since OGM-GE is originally designed for the classification task, we have included it as a benchmark for CREMA-D only (note that CMU-MOSI is a regression task, and anomalous tissue detection is an SVDD-based unsupervised task). Specifically, OGM-GE achieves a 61.0% accuracy on CREMA-D. Regarding MLA, we regret that its code has been withdrawn, preventing us from adding it as a benchmark. Nonetheless, we have now discussed both methods in the Related Work and Experiment sections of this revision. **Q2.** Concatenation can be viewed as a generalized addition for combining the representation $z$ and noise $e$. To see this point, for concatenation, we have $[z^T,e^T]\begin{bmatrix} W_1 \\ W_2 \end{bmatrix} = z^T W_1 + e^T W_2$, which reduces to the additive form $(z+e)^T W$ when $W_1 = W_2$. Notably, conditional GAN (Mirza et al. 2014) also integrates noise vectors via concatenation for controlled generation. Moreover, using concatenation allows us to replace $e$ with the fused MIB $\xi$ during main training without modifying the TRB's network architecture, thus offering more flexible cross-modal interactions compared to addition. Our ablation study on the CREMA-D dataset further confirms that the concatenation of $e$ (or $\xi$) with $z$ slightly outperforms a simple additive combination (macro-accuracy: 63.6\% vs. 62.4\%). **Other Comments Or Suggestions:** Properties 1 has been removed from the main text.
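The identity invoked in Q2 (a stacked weight matrix applied to the concatenation $[z^T, e^T]$ equals $z^T W_1 + e^T W_2$, collapsing to the additive form when $W_1 = W_2$) can be checked directly. A minimal sketch with plain lists as row vectors:

```python
def matvec(v, W):  # row vector (1 x n) times matrix (n x m)
    return [sum(vi * row[j] for vi, row in zip(v, W)) for j in range(len(W[0]))]

z, e = [1.0, 2.0], [0.5, -1.0]
W1 = [[1.0, 0.0], [0.0, 1.0]]
W2 = [[2.0, 1.0], [1.0, 2.0]]

# Concatenation followed by a stacked weight matrix ...
concat_out = matvec(z + e, W1 + W2)
# ... equals the sum of two separate projections, z^T W1 + e^T W2:
split_out = [s + t for s, t in zip(matvec(z, W1), matvec(e, W2))]
assert concat_out == split_out

# ... and collapses to the purely additive form (z + e)^T W when W1 == W2:
add_out = matvec([zi + ei for zi, ei in zip(z, e)], W1)
assert matvec(z + e, W1 + W1) == add_out
```

The extra freedom of concatenation is visible here: addition forces both inputs through the same projection, while the stacked matrix lets $W_1$ and $W_2$ differ.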
Summary: The paper proposes the OMIB framework to learn optimal multimodal information bottleneck (MIB) representations. It introduces a theoretically grounded objective that sets the regularization weight ($\beta$) within a derived bound and dynamically adjusts weights per modality (using parameter $r$) to balance imbalanced task-relevant information. OMIB combines modality-specific encoders with an optimal multimodal fusion (OMF) block that uses cross-attention, and it is implemented via a variational approximation. Experiments on synthetic data and other tasks (emotion recognition, sentiment analysis, and anomalous tissue detection) are conducted. Claims And Evidence: The authors support their claims with both theoretical proofs (for weight bounds and dynamic adjustment) and extensive experiments. Synthetic data validates the optimality of $β$ (Figure 3), and downstream experiments show consistent performance gains (e.g., +11.4% AUC improvement in anomalous tissue detection). However, some gains are modest, for instance, sentiment analysis on the CMU-MOSI dataset. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate. The framework is designed to address key MIB challenges (sufficiency vs. conciseness and modality imbalance), and the use of standard metrics (classification accuracy, AUC, F1) on diverse datasets (CREMA-D, CMU-MOSI, 10x-hBC) is well justified. Theoretical Claims: I checked the correctness of the proofs presented for Propositions 1–2. The derivations appear logically sound based on standard information theory properties and variational approximations. Experimental Designs Or Analyses: The experimental designs are generally sound. The paper evaluates OMIB on synthetic datasets to verify theoretical claims and on multiple real-world supervised tasks for empirical validation. 
The use of ablation studies (Table 6) to assess the impact of the warm-up phase, cross-attention, and the dynamic regularization factor is a strong point. Supplementary Material: I reviewed the supplementary material, which includes detailed descriptions of network architectures (Appendix H), experimental settings (Appendix I) and additional proofs (Appendix B and C). Relation To Broader Scientific Literature: The paper builds on existing work in multimodal fusion and the information bottleneck framework. It addresses known limitations of ad hoc regularization by providing a theoretically derived bound and dynamic adjustment. This situates OMIB as a meaningful extension in the area. Essential References Not Discussed: The paper doesn't cite [1], which tackles a similar challenge but in a broader self-supervised learning context. That work defines optimal shared and modality-specific information, enabling the learning of disentangled multimodal representation spaces. It develops a theoretical framework to assess the quality of disentanglement, even in scenarios where the Minimum Necessary Information (MNI) cannot be achieved—a situation common in real-world applications. [1] https://openreview.net/pdf?id=3n4RY25UWP Other Strengths And Weaknesses: Strengths: - Introduces a rigorous theoretical foundation for dynamically balancing modality-specific information using a derived $\beta$ bound and parameter $r$. - Validates the approach across synthetic and supervised tasks, demonstrating consistent performance improvements. - Conducts ablation studies that clearly show the importance of key components (warm-up, cross-attention, OMF block). Weaknesses: - The theoretical bounds (e.g., $M_u$) rely on estimating $H_{v_i}$ and $I(v_i;v_j)$, which is non-trivial in practice. The paper briefly mentions using MINE but does not discuss robustness to estimation errors or scalability to high-dimensional data. 
- While the cross-attention network’s $O(N⋅M^2)$ complexity is noted, its impact on training/inference time and scalability to large-scale datasets is unexplored. Comparisons with lighter fusion mechanisms (e.g., late fusion) would strengthen practicality claims. - Modality Scalability: The extension to $>3$ modalities is mentioned but not empirically validated. - The work focuses on supervised learning. Extensions of the OMIB framework to semi-supervised or self-supervised scenarios are especially important given the current landscape in AI. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Weaknesses** **W1.** Thank you for reminding us of this important point. **Robustness to estimation errors**: MINE is a theoretically validated estimator with strong consistency, which is achieved by optimizing the Donsker-Varadhan (DV) representation, a lower bound of the true MI (Belghazi et al., 2018). The optimal function that tightens the DV bound can be approximated using neural networks owing to universal approximation theorems, ensuring MINE's convergence to the true MI as sample size grows. Our experiments use datasets with relatively large sample sizes (e.g., Pathological Tissue (10,129)), so the estimation error is expected to be minor. To further mitigate batch-induced bias, we apply exponential moving averaging on gradients during MINE’s optimization. Additionally, in its original study, MINE was shown to accurately estimate mutual information for generating superior single-modal IB for MNIST classification, empirically demonstrating its reliability. Finally, the theoretical bounds $M_u$ and $M_l$ are designed conservatively to tolerate estimation errors. For instance, $M_u$ is set to $\frac{1}{3 (H(v_1)+H(v_2)-I(v_1;v_2))}$, which intentionally tightens the upper bound to ensure $\beta$ remains within a safe range even if $H(v)$ or $I(v_i;v_j)$ are slightly misestimated, thereby safeguarding convergence to the optimal MIB. **Scalability:** MINE scales linearly with data dimensionality and sample size, as shown in its original work. Moreover, its computational cost is amortized since it is only used once to estimate $M_l$ and $M_u$ prior to training. **W2.** We apologize for the confusion. Actually, OMIB adopts the late-fusion strategy (the OMF block) similar to L-MIB (Mai et al. 2022), where unimodal representations are first condensed to reduce noise via variational IB encoders before fusion using a light CAN. This contrasts with the heavier early fusion occurring in the full information space. 
To empirically verify OMF's scalability to large-scale datasets, we generated six large synthetic datasets as in SIM-I–III and measured OMF's training and inference time per epoch (see the table below). The results indicate that OMF scales linearly with large-scale datasets.

|Samples|CAN Main Training Time per epoch (s)|CAN Inference Time (s)|
|-|-|-|
|1e+5|0.2|0.2|
|2e+5|0.6|0.7|
|4e+5|1.1|1.0|
|6e+5|1.6|1.6|
|8e+5|1.8|1.8|
|1e+6|2.2|2.1|

**W3.** Since most MIB studies consider up to three modalities, we followed this routine. To address your concern about modality scalability, we conducted an additional experiment on synthetic datasets with up to five modalities. Each modality consists of a shared task-relevant, a unique task-relevant, and a unique task-irrelevant component. To isolate the impact of the number of modalities, the dataset size was fixed at $10^5$. The observed time costs (see the table below) scale approximately quadratically with $M$, consistent with the expected $\mathcal{O}(M^2)$ complexity.

|Modalities|Main Training Time (s)|Inference Time (s)|
|-|-|-|
|2|41|0.30|
|3|82|0.64|
|4|159|1.15|
|5|267|1.88|

**W4.** Thank you for this excellent comment, which coincides with the most significant challenge of deep MIB learning proposed by Shwartz-Ziv and LeCun (2023). Our primary goal is to establish the theoretical achievability of optimal MIB under the classical MIB paradigm, where downstream task labels are available. As noted by Tian et al. (2020), the optimal MIB inherently depends on the specific task, since what is relevant for one task may be irrelevant for another. So the optimal MIB may not be well defined without task labels in the first place. In our formulation, both shared and Modality-Specific Task-Relevant (MSTR) contents are considered, which further complicates the extension to label-free settings. 
Regarding self-supervised learning (SSL), while recent methods (e.g., DISTANGLEDSSL) have attempted to learn optimal MIB representations without labels, they typically rely on the strong MultiView assumption (Sridharan et al. 2008) that neglects MSTR. When this assumption is violated, MSTR can be excluded. Although this issue can be mitigated by adding regularization to increase the MI between the representations and inputs, the achievability of task-specific optimal MIB is agnostic. Thus, the guarantee of achieving optimal MIB with SSL in the presence of MSTR could be unattainable. For semi-supervised learning, a potential approach is to train OMIB on the available labeled data and then propagate labels to unlabeled data, with regularization such as a prior label distribution from labeled data to enhance generalization. However, the achievability of optimal MIB is not guaranteed either. **Essential References Not Discussed** We also thank you for pointing us to the DISTANGLEDSSL paper, which offers valuable insights into the SSL-based MIB. We will include and discuss it in the related work section of the revised version.
Summary: This paper proposes OMIB, a novel framework for learning optimal Multimodal Information Bottleneck (MIB) representations in multimodal learning. The authors address the challenge of imbalanced task-relevant information across modalities, which is a key issue in multimodal fusion. OMIB employs a dynamically weighted regularization strategy to optimize mutual information while mitigating redundancy and preserving modality complementarity. The approach leverages VAEs, a cross-attention network (CAN), and a two-phase training strategy to ensure efficient and adaptive multimodal representation learning. Theoretical derivations establish the conditions for achieving optimal MIB, and the framework is validated on both synthetic and real-world datasets across multiple tasks. Claims And Evidence: The paper makes several key claims: - OMIB achieves optimal MIB by dynamically balancing modality contributions using an adaptive weighting factor r. Supported by Proposition 2, which explicitly derives r and validates it with synthetic data experiments (SIM-{I-III}). - OMIB effectively reduces redundancy and preserves complementary information across modalities. Validated through ablation studies, which show significant performance degradation when OMIB’s fusion mechanism (OMF) or CAN is removed. - OMIB outperforms SOTA MIB-based and non-MIB-based fusion methods across multiple benchmarks. Supported by empirical results on the CREMA-D, CMU-MOSI, and tissue detection datasets. - OMIB maintains computational efficiency and scalability. Complexity analysis confirms O(N) time complexity, backed by scalability tests. Methods And Evaluation Criteria: Yes, however, the datasets chosen (CREMA-D, CMU-MOSI, tissue detection) are strong, but not the hardest multimodal learning benchmarks. Theoretical Claims: - The authors provide a rigorous information-theoretic foundation and mathematically derive an upper bound for the mutual information regularization parameter, β. 
- Theoretical results ensure sufficiency, consistency, redundancy, complementarity, and specificity in learned multimodal representations. - Proposition 3 validates the achievability of optimal MIB, but a more detailed analysis of robustness conditions (e.g., adversarial robustness) is missing. Experimental Designs Or Analyses: Strengths: Synthetic Experiments (SIM-{I-III}), comparison against SOTA methods on real-world datasets, ablation studies and complexity analysis. Weaknesses: No explicit handling of missing or corrupted modalities, no robustness analysis for biased datasets, generalization to unseen datasets or domains is not tested. Supplementary Material: Did not check. Relation To Broader Scientific Literature: The related work section is thorough and well-referenced. Essential References Not Discussed: The related work section is thorough. Other Strengths And Weaknesses: Strengths: + Avoids direct MI computation by reformulating the learning process using VAEs and CAN, making optimization more tractable. + Adaptive modality weighting via r-regularization ensures balanced modality contributions, preventing over-reliance on a dominant modality. + Two-phase training strategy (warm-up + main training) reduces gradient conflicts and improves stability in learning task-relevant features. + Benchmark performance is strong, consistently surpassing state-of-the-art methods in multiple tasks. + Scalability is validated, confirming OMIB’s efficiency for large-scale multimodal datasets. Weaknesses: - Important challenges remain, including robustness to missing modalities, generalization to unseen data. See below. Other Comments Or Suggestions: None. Questions For Authors: - The weighting mechanism depends on the KL-divergence ratio, which may still introduce instability in cases where one modality has significantly less information than the other. Can the authors comment on this? 
- The Cross-Attention Network (CAN) enhances modality fusion by ensuring that complementary information is shared across modalities. While CAN improves fusion, it is unclear if it explicitly enforces diversity. A contrastive learning term could further ensure that different modalities contribute non-redundant information. - In cases of completely missing modalities, CAN alone may not be sufficient without additional mechanisms such as modality imputation or self-supervised learning. The experiments assume all modalities are always available. Real-world multimodal settings often face missing, corrupted, or misaligned modalities (e.g., a failed camera or noisy audio in speech datasets). - The chosen datasets (CREMA-D, CMU-MOSI, tissue detection) are strong benchmarks, but they are not the hardest challenges in multimodal learning. E.g. Evaluation on medical imaging datasets with severe class imbalance. Code Of Conduct: Affirmed. Overall Recommendation: 3
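On the first question (stability of a weight derived from a KL-divergence ratio): squashing the ratio with a bounded function keeps the weight finite even under extreme imbalance, which is the mechanism the rebuttal describes. A minimal sketch, where the exact form and the `eps` guard are illustrative assumptions rather than the paper's formula:

```python
import math

def weight(kl_a, kl_b, eps=1e-8):
    """Bounded modality weight from a KL-divergence ratio; the 1 - tanh(.)
    squashing and the eps guard are illustrative, not the paper's exact formula."""
    return 1.0 - math.tanh(kl_a / (kl_b + eps))

print(weight(1.0, 1.0))      # balanced modalities -> ~0.238
print(weight(100.0, 0.001))  # extreme imbalance: saturates toward 0 instead of blowing up
print(weight(1.0, 0.0))      # eps guards the zero denominator
```

Because tanh saturates, a thousand-fold change in the raw ratio moves the weight only marginally once it is already large, which is what keeps training stable.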
Rebuttal 1: Rebuttal: **Questions For Authors** **Q1.** This is a good point. To mitigate potential instability arising from an extreme KL-divergence ratio ($KL_r$), we adopt several strategies. First, the raw $KL_r$ is not directly used; instead, the weight $r$ is computed as $1-\tanh(\cdot)$ and bounded, thus preventing extreme values. Since the $\tanh$ function is smooth and saturates, $r$ will not change abruptly in case of significant information imbalance (e.g., very large $KL_r$), thus promoting training stability. Second, we will add a small constant $\epsilon$ to the denominator in the computation of $KL_r$ to avoid division by $0$ and enhance numeric stability. Third, the batch-averaged KL ratio (i.e., $\frac{1}{N}\sum_{n=1}^N r_n$) is used to compute $r$, thus smoothing out sample-level fluctuations of the information ratio. Finally, Proposition 1 guarantees the convergence to the optimal MIB under bounded $r$ and $\beta$, regardless of the information imbalance. Empirical results from our synthetic experiments (SIM-I and -II), where one modality is designed to contain significantly more or less information than the other, further demonstrate this point. **Q2.** Thanks for this insightful comment. While CAN does not include an explicit contrastive loss, our training objective implicitly enforces diversity. As proved in Lemma 1, CAN's training objective function (Equation 17) ensures that both consistent (shared) and diverse (modality-specific) task-relevant contents are captured in the MIB upon convergence. In particular, the redundancy penalty terms ($I(\xi;z_i)$) in the objective function prioritize diverse, informative content as modality‐specific features incur less redundancy compared to shared features. Empirically, our synthetic experiments (Table 2) show that our method significantly surpasses an MIB that exclusively contains shared & task-relevant content, and achieves performance comparable to the authentic optimal MIB. 
These results confirm the effectiveness of our OMIB framework in accounting for information diversity. We acknowledge that incorporating a contrastive learning term could further promote diversity. However, adding such a term would complicate the theoretical analysis of achieving the optimal MIB and might disrupt the delicate balance between minimizing redundancy and maximizing task relevance established by our current objective. We consider this an intriguing direction for future work, particularly in exploring whether contrastive loss and redundancy penalty are functionally equivalent in enforcing diversity. **Q3.** Thank you for highlighting this practical concern. The primary goal of this work is to establish a rigorous theoretical foundation for achieving optimal MIB under the classical MIB paradigm. To allow tractable analysis, we adopt simplified assumptions, including the availability of all modalities, which admittedly do not fully capture real-world complexities. However, these assumptions align with the spirit of Ali Rahimi’s NIPS 2017 Test of Time Award remark—*“Simple experiments, simple theorems are the building blocks that help us understand more complicated systems”*. Your suggestion of incorporating additional mechanisms for handling missing modalities is valuable. One promising approach would be to use modality-complete data to train an auxiliary VAE that maps one modality's observations to the variational parameters of the other. In cases of modality-incomplete data, the available modality could then approximate the variational representations of the missing one via the reparameterization trick prior to CAN-mediated fusion. However, integrating this component would alter the framework's architecture and training objectives and compromise its theoretical guarantees, deviating from our study's original focus. Rather, these extensions constitute our *de novo* future work with a more practical orientation. 
**Q4.** We conducted an additional experiment using the MM-IMDb dataset—a challenging text-visual dataset with severe class imbalance. Specifically, it consists of 25,959 sample pairs across 23 movie genres, with the largest class containing 13,967 samples and the smallest 338, representing a 41-fold imbalance. A stratified split was applied to form training (60%), validation (10%), and testing (30%) sets. We evaluated our method and nine benchmark methods using the macro F1-score (see the table below). The results demonstrate that our method outperforms the benchmarks in this more challenging setting.

|Methods|Concat|BiGated|MISA|deep IB|MMIB-Cui|MMIB-Zhang|E-MIB|L-MIB|C-MIB|OMIB|
|-|-|-|-|-|-|-|-|-|-|-|
|F1-score|0.218|0.309|0.334|0.374|0.353|0.373|0.303|0.377|0.357|0.409|
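Macro F1, the metric used for the imbalanced MM-IMDb comparison above, is the unweighted mean of per-class F1 scores, so ignoring a rare class is penalized in a way plain accuracy is not. A minimal sketch:

```python
def macro_f1(y_true, y_pred, classes):
    """Unweighted mean of per-class F1 (a class that is never predicted and
    never correctly hit contributes an F1 of 0)."""
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return sum(f1s) / len(f1s)

# 90/10 imbalance: always predicting the majority class gets 0.9 accuracy
# but well under 0.5 macro F1, because the minority class contributes F1 = 0.
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 100
print(macro_f1(y_true, y_pred, classes=[0, 1]))  # -> ~0.474
```

This is why macro F1 is the natural choice for a 41-fold imbalanced label distribution: a method cannot score well by fitting only the frequent genres.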
UI-Vision: A Desktop-centric GUI Benchmark for Visual Perception and Interaction
Accept (poster)
Summary: The paper introduces UI-Vision, a large-scale, desktop-centric benchmark for evaluating Graphical User Interface (GUI) agents in visual perception and interaction. Unlike existing benchmarks that focus on structured web and mobile interfaces, UI-Vision targets desktop environments, which lack standardized automation APIs and require direct screen interpretation. The benchmark spans 83 open-source applications, providing 6,484 tasks with dense human-annotated data across three key tasks: Element Grounding, Layout Grounding, and Action Prediction. Evaluation of leading Vision-Language Models (VLMs) and GUI agents highlights major limitations, such as poor spatial reasoning (best model achieves only 18.0% accuracy in element grounding) and difficulty in action execution, particularly dragging actions. However, combining LLMs with GUI-grounding models significantly improves action recall (2×–5× gains). The findings emphasize the need for better multimodal training, stronger screen parsing, and enhanced planner-grounding integration for AI-driven GUI interaction. To advance research, UI-Vision is fully open-source, aiming to set a new standard for desktop GUI automation. Claims And Evidence: 1. The paper presents quantitative results showing that state-of-the-art models (GPT-4o, Gemini, etc.) achieve low accuracy on grounding and action tasks. However, the specific reasons for failure (e.g., lack of spatial reasoning, reliance on textual cues) are asserted rather than deeply analyzed. Additional qualitative error analyses or ablation studies could better support this claim. In addition, some newer models, including Qwen2.5-VL, InternVL-2.5, Claude-3.7, etc., should be evaluated. 2. While the dataset covers a broad range of applications, it focuses primarily on open-source software, which may not fully capture the complexities of proprietary desktop environments (e.g., Windows/MacOS enterprise applications). 
The assumption that findings generalize across all GUI platforms could be problematic. 3. The experiments show a 2×–5× improvement in recall when LLMs are paired with grounding models. However, it is unclear whether this improvement is due to better action selection or simply improved recall on basic UI elements. Further analysis of failure cases and different LLM planner strategies would clarify the robustness of this conclusion. Methods And Evaluation Criteria: 1. While UI-Vision is license-permissive, it excludes proprietary applications (e.g., Microsoft Office, Adobe Photoshop, enterprise software). Desktop automation is often used in closed-source environments, so the benchmark might not fully reflect real-world constraints where API restrictions and security settings affect interactions. A more balanced dataset, incorporating at least some closed-source applications, would improve applicability. 2. The Action Prediction task assumes step-by-step task execution, but real GUI agents often require long-term reasoning (e.g., navigating through multiple menus before completing a task). Introducing multi-step action planning evaluation would be beneficial, as current metrics mostly assess immediate action correctness rather than goal completion efficiency. 3. UI-Vision is focused exclusively on desktop applications, whereas many GUI automation challenges also involve cross-platform scenarios (e.g., web-based applications embedded in desktop environments). Incorporating some hybrid environments (e.g., web-based GUIs within desktop software) would better reflect modern GUI automation challenges. Theoretical Claims: This work focuses on introducing a new benchmark for evaluating Graphical User Interface (GUI) agents, specifically designed for desktop environments. The paper discusses the creation of the UI-Vision benchmark, its tasks, and the evaluation of various models on this benchmark. Experimental Designs Or Analyses: 1. 
The data collection process involved human annotators, which could introduce variability. However, the multi-stage quality checks and periodic verification by separate annotators and authors aim to mitigate this. 2. The results indicate that even state-of-the-art models struggle with spatial grounding and precise action execution. This highlights the need for further research and development in this area. Supplementary Material: I have reviewed all of the supplementary material. Relation To Broader Scientific Literature: The paper's evaluation reveals specific limitations in current models' spatial understanding and action execution capabilities. It highlights critical areas for future research that align with growing recognition in the literature of the need for improved visual-spatial reasoning in multimodal agents. In summary, UI-Vision advances the field by addressing documented limitations in existing benchmarks while building on recent progress in multimodal AI and GUI automation research. Its comprehensive approach provides a new standard for evaluating desktop GUI agents, filling important gaps identified in prior literature. Essential References Not Discussed: None. Other Strengths And Weaknesses: I hope the authors provide an anonymous repo containing the full benchmark and eval scripts. This is extremely important for reviewers to validate the quality of the proposed benchmark. Other Comments Or Suggestions: 1. In Table 3, based on the performance of open-source VLMs, why does InternVL2-8B achieve such low performance in the basic setting? 2. Can the authors provide more failure case analyses for VLMs? It is important to understand the significance of the proposed benchmark. Questions For Authors: None. Ethical Review Flag: Flag this paper for an ethics review. Ethics Expertise Needed: ['Privacy and Security'] Ethical Review Concerns: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **R1: Failure cases** Claude 3.7 Sonnet was released on Feb 24, 24 days after the deadline, and Qwen 2.5-VL on Jan 28, just two days prior—making it infeasible to include them in the review version. However, we have now evaluated both models on our grounding benchmarks, and results are available at <[link](https://bit.ly/4iLiu3j)>. We summarize key error patterns below and confirm that they hold for the above models, and refer the reviewer to our responses to Reviewer 3NZB (R3) and Reviewer 7jPV (R1) for more details and examples: **Element Grounding:** Models struggle with visually similar elements, platform-specific icons, and small elements in dense UIs. GUI agents perform better on functional tasks due to training alignment, while VLMs underperform, revealing task-specific gaps. **Layout Grounding:** Frequent issues include overly large or loosely defined bounding boxes and failure to group related UI elements correctly. **Action Prediction:** Models often fail to ground actions accurately, hallucinate elements/actions, and perform poorly on dense, complex interfaces. **R2: Concern about generalizing findings from open-source to proprietary desktop environments** We appreciate the reviewer’s concern. However, many open-source apps closely emulate proprietary counterparts, capturing similar complexity. For instance, LibreOffice mirrors Microsoft Office features and serves over 200 million users[1]. Given this overlap in functionality and interface design, we believe our benchmark remains meaningful and representative of real-world GUI scenarios. [1] Wikipedia page for LibreOffice. 
https://en.wikipedia.org/wiki/LibreOffice **R3: Limited real-world applicability due to the exclusion of closed-source applications** We agree that automation is widely used in proprietary desktop environments; however, including such applications poses legal and licensing challenges, as many restrict redistribution of software interfaces, recordings, or interaction data, making it difficult to build a publicly shareable dataset around them. To ensure accessibility and reproducibility, we focus on open-source software. More importantly, our benchmark includes widely-used open-source applications like LibreOffice, VSCode, GIMP, Firefox, Brave, and Shotcut, which serve as strong counterparts to commercial software. These applications support complex, real-world workflows, share high functional similarity with proprietary software and are backed by active communities. We believe this focus ensures UI-Vision remains both practical and widely applicable. **R4: Planner + VLM grounding analysis** We clarify that the planner selects the action and target element, while the grounding model only provides coordinates. Thus, the improvement is primarily due to better action selection. To analyze this further, we compared the decrease in error rates across platform categories (Fig. <[link](https://bit.ly/3RimGMd)>) in this setup. Productivity tools show the largest improvement (26%), even though Entertainment tools have the highest baseline grounding accuracy—highlighting the planner’s effectiveness. In contrast, Creativity platforms show the smallest improvement (14%), reflecting the challenges of planning and grounding in functionally rich interfaces with small UI elements. **R5: Lack of multi-step planning evaluation** We agree that long-term planning is important, but current models still struggle with single-step actions (nearly 0% recall on drag and only 20% on click actions). 
Strengthening performance on these core tasks is a necessary first step, with multi-step planning as a valuable direction for future work. **R6: Missing support for hybrid or cross-platform GUI environments** We agree that hybrid environments are a valuable direction for future work. However, our current benchmark already presents significant challenges for existing models, showing that even desktop settings require substantial progress. We believe UI-Vision offers ample opportunities to advance model capabilities. **R7: Anonymous repo with full benchmark** We apologize for the inconvenience, but as per ICML policy, we are only permitted to share links to figures and tables. However, to give reviewers a clearer sense of the benchmark, we have included detailed failure cases in our response to Reviewer 3NZB (R3) and provided additional task examples at <[link1](https://bit.ly/4clZMx6)> and <[link2](https://bit.ly/3G06lsO)> (5s loading). All code, data, and evaluation scripts will be released soon. **R8: Low performance of InternVL-8B** The main reason is that InternVL2-8B fails to consistently generate meaningful bounding boxes, often producing arbitrary outputs like [0, 0, 50, 50] or [0, 0, 200, 200]. This suggests limited training on grounding tasks, particularly for small UI elements. Its poor performance is also reflected in the ScreenSpot benchmark [1] (Table 2). [1] Wu, et al. OS-ATLAS: A foundation action model for generalist GUI agents.
Summary: This paper introduces UI-Vision, a benchmark for evaluating AI agents’ ability to interact with desktop Graphical User Interfaces (GUIs). Unlike existing benchmarks that focus on web or mobile environments, UI-Vision is designed specifically for desktop platforms and is claimed to be the largest of its kind. It includes 6,484 tasks across 83 software applications, spanning categories such as productivity, development, creativity, education, browsers, and entertainment. The benchmark assesses models on three key tasks: element grounding, layout grounding, and action prediction. The study evaluates multiple state-of-the-art models, including GPT-4o, Gemini, Claude, and various open-source alternatives, highlighting their limitations in handling complex desktop interactions. Claims And Evidence: The paper claims that UI-Vision is the largest and most diverse benchmark for evaluating desktop GUI agents. This is well-supported by the dataset size (6,484 tasks, 83 applications) and comparisons with existing benchmarks. Claims about performance gaps in SOTA models are backed by experiments. Methods And Evaluation Criteria: The evaluation criteria align well with the problem: GUI automation requires visual perception and interaction, and the paper assesses models across grounding, layout recognition, and action prediction. The dataset is large and diverse, covering real-world applications, which enhances credibility. However, the evaluation could be improved by including human performance baselines to contextualize model failures. The choice of IoU, recall@d, and accuracy metrics makes sense, though more fine-grained analysis (e.g., error types in action prediction) would be useful. Theoretical Claims: The paper does not include formal theoretical claims or proofs. The methodology is empirical, focusing on dataset construction, benchmarking, and performance analysis. No verification of proofs is required. 
Experimental Designs Or Analyses: The experimental setup is generally sound, using diverse state-of-the-art models (GPT-4o, Gemini, Claude, and open-source alternatives). The dataset is well-documented, and comparisons with existing benchmarks are thorough. However, there are limitations in the experimental design: 1) The effect of dataset bias, e.g., open-source software selection, is not discussed; 2) No cross-software generalization analysis, e.g., how well models trained on some applications transfer to unseen ones; 3) The study lacks enough error analysis, e.g., why do models fail at certain tasks? Supplementary Material: Yes, including the dataset creation, data statistics and examples. Relation To Broader Scientific Literature: The paper correctly situates UI-Vision within the broader field of GUI automation, multimodal learning, and AI-driven interaction systems. It builds on prior GUI benchmarks like MiniWoB++, WebArena, and OmniAct but extends focus to desktop environments. It also connects with multimodal learning research, citing relevant vision-language models. Essential References Not Discussed: N/A, no obvious omissions were identified from closely related literature. Other Strengths And Weaknesses: Strengths: • The first large-scale benchmark specifically designed for desktop GUI automation, addressing a gap in existing benchmarks that focus on web and mobile platforms. • Evaluates models on element grounding, layout recognition, and action prediction, providing a multi-faceted assessment of GUI interaction. • The dataset spans 83 diverse applications across various categories (e.g., productivity, development, creativity), enhancing its applicability to real-world use cases. • Benchmarks state-of-the-art multimodal models, revealing critical performance gaps in GUI perception and interaction. Weaknesses: • Dataset bias: The dataset focuses exclusively on open-source software, limiting its applicability to commercial tools. 
• Limited analysis of model failures: While performance limitations are discussed, there is no in-depth analysis of specific error patterns (e.g., types of misclicks, confusion between similar UI elements, or issues in handling dynamic interfaces). • The benchmark focuses on accuracy but does not assess inference speed, latency, or token efficiency for real-world deployment. • While the paper evaluates multiple models on UI-Vision, it does not include ablation studies to analyze which task components contribute most to difficulty. Similarly, there is no systematic analysis of how models generalize across different software categories, such as productivity vs. creative tools. Such studies could provide deeper insights into model weaknesses and potential benchmark improvements. Other Comments Or Suggestions: I look forward to the open-source release of the dataset and resources, which will allow for a more comprehensive review of the benchmark’s reproducibility, cross-benchmark comparisons, and dataset extensibility. Questions For Authors: 1. If models were previously trained on applications similar to those in UI-Vision, how do you ensure the benchmark fairly evaluates generalization rather than memorization? 2. Your experiments reveal significant performance drops in spatial element grounding (best model: 18%) and drag actions (near-zero recall). Have you analyzed why these tasks are particularly difficult? 3. Given that the best-performing model (Gemini-1.5-Pro) achieves only 30.8 IoU in layout grounding, what are the most common failure cases? Do models struggle more with complex, nested UI layouts (e.g., tabbed interfaces) or dense interfaces with many overlapping elements? 4. Do you anticipate domain shift issues between open-source and closed-source UI designs? 5. Do you envision automated UI annotation tools or self-supervised learning techniques to reduce reliance on human annotators? 
What challenges do you foresee in ensuring annotation consistency at scale? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and address them below. **R1: Fine-grained error analysis, ablations on task difficulty, and generalization across software categories** To deepen our understanding of model performance, we conducted an error analysis through a human study for element grounding and found that models often struggle with small UI elements and platforms with high functional density. To investigate further, we leveraged the dense bounding box annotations in our dataset and sampled a diverse subset of elements that were consistently challenging for top-performing models such as UI-TARS, UGround-v1, and Aria UI. We selected cases where one or more models failed, resulting in a subset of 5479 samples spanning basic, functional, and spatial categories. We report detailed results across software and categories in the table at <[link](https://bit.ly/4iLiu3j)>, and summarize key findings below: **Error Types:** Models often confuse visually similar elements, miss platform-specific icons, and fail to detect small elements in dense layouts. **Task Difficulty:** GUI agents perform better on functional grounding than basic grounding, likely due to alignment with their training data. In contrast, VLMs underperform on functional tasks, revealing task-specific weaknesses. **Category Generalization:** Performance varies significantly across software categories. Creativity tools like Blender and GIMP (112 elements/frame, 418 px/element) show the lowest accuracy, while simpler platforms like VLC (63 elements/frame, 875 px/element) perform best. Notably, screenshot resolution had little impact. We refer the reviewer to our response to Reviewer 3NZB (R3) for a qualitative study and analysis on **Layout Grounding** and **Action Prediction** and more details and examples on **Element Grounding**. **R2: Cross-software generalization analysis** We compare model performance on common apps (e.g., VSCode) vs. 
less common ones (e.g., FreeCAD, QGIS) using the element grounding subset from R1. Results are included in the table at <[link](https://bit.ly/4iLEnQ5)>. While we cannot confirm the exact training data used in several models, this serves as a proxy for generalization analysis. We observe all models show significant accuracy drops on less common apps, confirming consistent generalization challenges. **R3: Concern about dataset bias due to focus on open-source software** Many open-source applications closely emulate proprietary counterparts, capturing similar complexity. For instance, LibreOffice mirrors Microsoft Office features and serves over 200 million users[1]. Given this overlap in functionality and interface design, we believe our benchmark remains meaningful and representative of real-world GUI scenarios. We refer the reviewer to our response to Reviewer aEXy (R3) for more details on our choice to focus on open-source software. [1] Wikipedia page for LibreOffice. https://en.wikipedia.org/wiki/LibreOffice **R4: Evaluation of inference speed, latency and token efficiency** We perform a detailed analysis of the inference speed, token efficiency and latency for different models and different benchmark tasks and report the numbers in <[link](https://bit.ly/4jeHdNC)>. **R5: Ensuring generalization vs. memorization** Since we do not have access to the training data recipes of most of the models, we are not able to carry out a comprehensive study on this point. However, the failure case of Element Grounding in Fig. 12 in <[link](https://bit.ly/43xRn7f)> indicates that memorization alone cannot ensure accurate grounding. Models need good generalization ability to excel on the task. **R6: Analysis of spatial grounding and drag actions** In the spatial setting, models must first correctly identify the reference element and then reason about its spatial relation—both steps are required for success. Also, VLMs are known to struggle with spatial reasoning, limiting performance. 
For drag actions, models are rarely trained on such interactions in web data, making them difficult to execute. Also, success depends on accurately predicting both start and end points, increasing the chance of error. **R7: Failure cases in layout grounding** As shown in Fig. 13(a) and 14(a) in <[link](https://bit.ly/4iNTBnL)>, Gemini-1.5-Pro often fails to return a minimal bounding box for the ground truth region, although the correct region is usually contained within the predicted box. **R8: Domain shift between open and closed software** Yes, we do. However, open-source systems are built to provide functionalities similar to those of closed-source counterparts, so evaluating model capabilities in these scenarios will directly correlate with those of closed-source ones. **R9: Potential for automated annotation** Yes, we do. We have applied LLMs in the annotation of layout grounding in Sec 3.2. However, the major challenge is that UI tasks are quite fine-grained, so it will be hard to control the quality during automatic annotation at scale.
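The layout-grounding failure mode noted in R7 (predicted boxes that contain the ground-truth region but are far too large) is exactly what the IoU metric penalizes. A minimal sketch with illustrative box coordinates (not taken from the benchmark):

```python
def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2). IoU punishes oversized predictions
    # even when they fully contain the ground-truth region, since the
    # union grows while the intersection stays fixed.
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

gt = (100, 100, 200, 200)   # illustrative ground-truth region
loose = (50, 50, 300, 300)  # contains gt but is far too large
print(iou(gt, loose))  # → 0.16
```

Even though the loose box fully covers the ground truth, its IoU is only 0.16, which mirrors why loosely defined predictions score poorly on layout grounding.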
Summary: The authors introduce UI-Vision, a comprehensive desktop GUI benchmark with 83 open-source applications, focusing on three tasks: Element Grounding, Layout Grounding, and Action Prediction. Built from human demonstrations and expert annotations, it evaluates GUI agents’ visual perception and interaction capabilities. Tests on top VLMs reveal poor spatial grounding (e.g., 18% accuracy) and action execution (e.g., 4.4% recall on clicks), highlighting gaps in desktop GUI automation. Claims And Evidence: - UI-Vision’s focus on desktop GUIs fills a critical gap, with its diverse tasks (e.g., layout grounding) offering a fresh approach beyond web/mobile benchmarks. - Fig. 1 shows a task example but lacks failure cases to illustrate model struggles. - Sec. 3.2: Layout grounding generation via LLAMA-3.3-70B is mentioned, but validation process details are missing. Methods And Evaluation Criteria: - The open-source, densely annotated dataset (83 apps, 2,072 tasks) with real-world complexity is a valuable resource for GUI agent research. - Table 5 lacks latency metrics, vital for practical GUI automation, despite extensive action evaluation. - Sec. 4.2: Action metric definitions (e.g., Recall@d) lack specific $d$ values, muddying interpretation. Theoretical Claims: N/A Experimental Designs Or Analyses: - Evaluations (Tables 3-5) expose clear model weaknesses (e.g., spatial grounding at 18%, drag action struggles), providing actionable insights for future development. - No ablations test the impact of annotation density (e.g., 71 boxes/frame) or task design (e.g., spatial vs. functional grounding). How do these affect performance? - Sec. 3.1: How are "expert annotators" qualified beyond degrees? Training details are vague, affecting reproducibility. Supplementary Material: N/A Relation To Broader Scientific Literature: Good to discuss recent work. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: - I understand that this is a Desktop-centric GUI Benchmark, but now many benchmarks or datasets [1-4] have already covered these, and I think the unique contribution of this work is a bit limited. [1] Xu Y, Wang Z, Wang J, et al. Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction[J]. arXiv preprint arXiv:2412.04454, 2024. [2] Xie T, Zhang D, Chen J, et al. Osworld: Benchmarking multimodal agents for open-ended tasks in real computer environments[J]. Advances in Neural Information Processing Systems, 2024, 37: 52040-52094. [3] Zheng B, Gou B, Kil J, et al. GPT-4V (ision) is a Generalist Web Agent, if Grounded[C]//International Conference on Machine Learning. PMLR, 2024: 61349-61385. [4] Koh J Y, Lo R, Jang L, et al. VisualWebArena: Evaluating Multimodal Agents on Realistic Visual Web Tasks[C]//Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2024: 881-905. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the value of our desktop-focused benchmark, task diversity beyond web/mobile settings, dense annotations, and actionable insights. We address the concerns below. **R1: Failure cases** We summarize key error patterns below and refer the reviewer to our response to Reviewer 3NZB (R3) for more details and examples. - **Element Grounding:** Models often confuse visually similar elements, fail to recognize platform-specific icons, and struggle with small elements in dense layouts. - **Layout Grounding:** Common issues include bounding boxes that are too large or loosely defined and failure to group related UI elements correctly. - **Action Prediction:** Models frequently fail to ground actions correctly, hallucinate UI elements, and perform poorly on complex interfaces with many interactive elements. **R2: LLAMA-3.3-70B layout grounding validation process** The authors conducted a manual validation process to ensure quality using three criteria: **(i)** bounding boxes must tightly enclose relevant UI elements without including unrelated regions; **(ii)** grouped elements must form a semantically meaningful and visually coherent unit; and **(iii)** labels and descriptions must accurately reflect the group's function. Groups failing any check were discarded. The authors had a detailed protocol and access to the platform and its documentation to ensure consistency. We will include these validation details in the camera-ready version. **R3: Latency metrics** We report latency per query, average output tokens, and GPU usage across all three tasks in the tables at <[link](https://bit.ly/4jeHdNC)>, using default Hugging Face implementations for consistency. Token efficiency was measured with GPT-4 tokenization. Models like UI-TARS, trained on action-heavy tasks, generate longer outputs due to detailed step-by-step reasoning. 
**R4: *Recall@d metric and value selection*** We choose the *d* values based on the average bounding box size across the dataset after resizing all screenshots to a standard resolution (800×700), yielding a base value of (25, 35). This base value is then rescaled based on the original resolution of each sample to ensure consistent evaluation across varying interface sizes. **R5: Ablation on annotation density and task design impact** We perform a detailed ablation analysis (Fig. 18 at <[link](https://bit.ly/4c8La3M)>) to understand factors affecting performance on the 3 tasks. Across all three tasks, we find that densely packed applications with smaller UI elements like GIMP (112 elements/frame, area of 418 px/element) show lower performance compared to entertainment platforms with simpler layouts like VLC (63 elements/frame, area of 875 px/element). Regarding task design, GUI agents perform comparably or better on functional grounding than basic grounding, likely due to alignment with their training data. In contrast, both open- and closed-source VLMs perform worse on functional tasks. Spatial grounding is the most challenging, as it requires identifying the correct element and reasoning about its relative position—an area where VLMs generally struggle due to limited spatial reasoning. For a more comprehensive analysis with detailed settings, we refer the reviewer to our detailed response to Reviewer 7jPV (R1). **R6: Annotator qualifications and training process** Beyond academic background, annotators were selected through technical assessments, language tests, and task-specific bootcamps. Those who didn’t meet the criteria were excluded. We used a detailed annotation protocol refined during a month-long pilot with feedback to ensure consistency (L-676–677). Quality was further ensured through manual reviews and ongoing performance monitoring. We will clarify these details in the camera-ready version. **R7: Clarification on UI-Vision’s unique contribution vs. 
existing benchmarks** We appreciate the reviewer’s concern and would like to clarify the distinct contribution of our work. While recent efforts [1–4] have advanced GUI agent evaluation, they primarily focus on web environments or online interaction. Specifically, [1] includes limited desktop data and annotations (OmniAct: 5412 training samples across 38 platforms only for action prediction). [3] and [4] focus mostly on web-based tasks. While [2] (OSWorld) targets desktop platforms, it evaluates across a limited number of platforms in an online setting using broad task completion metrics. In contrast, our benchmark supports offline evaluation across 450 real-world desktop tasks spanning 83 applications. It provides a complete pipeline of three benchmark tasks, along with detailed evaluation metrics. This setup allows models to be assessed from basic perception to planning and execution, all within a single structured benchmark. Unlike existing works, UI-Vision offers fine-grained insights into where and how models fail, making it a valuable tool for diagnosing and improving GUI agents.
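The Recall@d construction described in R4 of the rebuttal above can be sketched in code. Only the (800×700) base resolution and (25, 35) base thresholds come from the rebuttal; the exact rescaling rule, names, and data points below are assumptions for illustration:

```python
import math

BASE_RES = (800, 700)  # reference resolution stated in R4
BASE_D = (25, 35)      # base pixel thresholds at that resolution

def scaled_thresholds(width, height):
    # Assumed rescaling rule: scale thresholds by the average of the
    # per-axis resolution ratios relative to the reference resolution.
    s = (width / BASE_RES[0] + height / BASE_RES[1]) / 2
    return [d * s for d in BASE_D]

def recall_at_d(preds, golds, sizes):
    # Fraction of predicted click points within distance d of the
    # ground-truth point, computed for each threshold d.
    hits = [0] * len(BASE_D)
    for p, g, (w, h) in zip(preds, golds, sizes):
        for i, d in enumerate(scaled_thresholds(w, h)):
            hits[i] += math.dist(p, g) <= d
    return [h / len(golds) for h in hits]

preds = [(100, 100), (400, 300)]  # illustrative predicted clicks
golds = [(110, 100), (500, 300)]  # second prediction is 100 px off
sizes = [(800, 700), (800, 700)]  # screenshots at the base resolution
print(recall_at_d(preds, golds, sizes))  # → [0.5, 0.5]
```

At the base resolution the thresholds stay at (25, 35); the first prediction (10 px off) counts as a hit at both thresholds, while the second (100 px off) misses both.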
Summary: This paper introduces a desktop GUI benchmark (i.e., UI-Vision) that spans 83 real-world environments with open-source and permissive data. It enables evaluation of three key tasks: element grounding, layout grounding, and action prediction. The evaluation reveals the limitations of existing works in handling desktop environments. ## Update after rebuttal I appreciate the authors' clarifications. My main concerns have been addressed by the rebuttal. I lean toward accepting the paper, provided the additional discussions are incorporated in the revised version. Claims And Evidence: 1. Claiming that the proposed benchmark is the largest desktop-centric benchmark is somewhat unconvincing. As shown in Table 1, the proposed benchmark contains 6484 samples, while OmniAct (Kapoor et al., 2024) contains 9802 samples. 2. L209 states that the final dataset consists of 442 high-quality demonstrations across 83 applications. But L055 (right) states that the proposed benchmark contains 450 recorded videos spanning 83 platforms. Methods And Evaluation Criteria: The proposed benchmark dataset is useful for the comprehensive evaluation of autonomous GUI agents, covering essential agent capabilities. Theoretical Claims: Yes. Experimental Designs Or Analyses: It would be better to add qualitative evaluations to the paper. Supplementary Material: Yes. Relation To Broader Scientific Literature: The key contribution of this paper is the proposed desktop-centric GUI benchmark. It is useful for comprehensive element grounding, layout grounding, and action navigation tasks. Essential References Not Discussed: No. Other Strengths And Weaknesses: The paper is well-organized with a clear structure. The proposed benchmark dataset could be useful for future research in the community. However, this benchmark is still limited to offline and static scenarios. Other Comments Or Suggestions: The number of videos is inconsistent in the paper. 
Questions For Authors: How to ensure the consistency of human annotators during the dataset creation process? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the value of our desktop-centric benchmark, its utility for comprehensive analysis of GUI agents, and the clarity of the paper. Below, we address the concerns raised. **R1: Clarification on “largest desktop-centric benchmark” claim** We appreciate the reviewer’s point and agree that “largest” can be interpreted in different ways. While OmniAct includes 9802 samples (7639 desktop-related), it focuses solely on action prediction. In contrast, our benchmark includes 6484 samples across 83 applications and supports three tasks—action prediction, layout grounding, and element grounding—with dense frame-level UI annotations. Considering the range of platforms, tasks, and annotation details, we believe UI-Vision is the most comprehensive desktop GUI benchmark, and we are happy to revise the claim for clarity. **R2: Clarification on 442 vs. 450 video count inconsistency** To clarify, our dataset contains 450 densely annotated videos across 83 applications. However, for the action prediction task, we use 442 videos, excluding 8 that involved complex actions (e.g., press and hold Ctrl + drag + release), which most models cannot yet handle reliably, making evaluations difficult. This is noted in the Limitations section (L795–797), and we will ensure both this and the earlier typo are clarified in the camera-ready version. **R3: Qualitative Evaluation** Below we highlight qualitative error analysis for all three benchmark tasks: **Element Grounding:** Figures are available at <[link](https://bit.ly/43xRn7f)>. Observations are based on the SOTA UI-TARS model. >**Fine-grained ambiguity:** The model fails to recognize the correct target among several visually similar candidates (Fig. 9), highlighting the need for improved disambiguation strategies. >**Lack of domain knowledge:** The model misinterprets platform-specific elements, such as “Fontwork” represented by the “F” symbol (Fig. 10). 
Incorporating external knowledge could help improve performance. >**Small element detection:** The model struggles with small UI elements, particularly in high-resolution or dense interfaces (Fig. 11). Iterative zoom-in strategies may address this limitation. >**Cross-platform generalization:** The model incorrectly transfers layout assumptions across platforms—e.g., predicting the minimize button's position on macOS as it would appear on Windows (Fig. 12). This suggests memorization and overfitting. **Layout Grounding:** Example cases below are available at <[link](https://bit.ly/4iNTBnL)> >**Inaccurate bounding box placement:** Closed-source models often predict bounding boxes that loosely cover the correct region without precisely matching its boundaries (Fig. 13a, 14a). This suggests difficulty in precise layout partitioning. >**Poor functional grouping:** Open-source models sometimes fail to group elements correctly, even when the query explicitly mentions them (Fig. 13b). >**Superficial semantic matching:** Models sometimes default to grounding smaller elements that share surface-level keywords with the query but are semantically unrelated (e.g., predicting a “Design” button for a different design-related query, Fig. 14b). **Action Prediction:** Example cases below are available at <[link](https://bit.ly/42gIBID)> >**Poor grounding:** Models often predict the correct action type but fail to ground it to the appropriate UI element (Fig. 15). This reflects challenges in bridging perception and execution. >**Lack of platform knowledge:** We observe that models sometimes hallucinate actions (e.g., referring to non-existent elements) or misinterpret platform-specific elements (Figs. 16, 17), likely due to limited training exposure to diverse desktop environments. >**High interface complexity:** Dense and feature-rich platforms pose greater challenges for accurate action prediction. 
UI-TARS exhibits the highest error rate (85%) on creativity platforms (112 elements/frame) while performing better (72% error) on simpler education platforms (62 elements/frame). **R4: Clarification on offline/static benchmark setting** Our benchmark is offline and static by design. The controlled setup allows for a fine-grained evaluation of how perception and grounding errors affect downstream actions. By structuring tasks from perception to action prediction, UI-Vision helps isolate failure modes and provides insights crucial for building more robust agents before deployment in dynamic environments. **R5: Annotator consistency during dataset creation** We partnered with a for-profit company that provided experienced annotators (L-604-605). All annotators underwent training on the software and were required to pass several assessments related to the tasks to proceed. Those who failed were excluded. Additionally, we followed a detailed annotation protocol, which was refined during a month-long pilot phase where annotators received detailed feedback to ensure high-quality and consistent data (L-676-677).
TimeStacker: A Novel Framework with Multilevel Observation for Capturing Nonstationary Patterns in Time Series Forecasting
Accept (poster)
Summary: The paper introduces TimeStacker, a new framework designed to enhance time series forecasting by effectively capturing nonstationary patterns. The core innovation lies in its stacking mechanism, which sequentially aggregates patches of varying sizes to balance global and local signal representations. Additionally, the framework employs a frequency-based self-attention module that improves feature modeling by computing similarity in the frequency domain while aggregating in the time domain. Experimental results across multiple real-world datasets (energy, finance, weather) demonstrate that TimeStacker achieves state-of-the-art performance, surpassing existing models in both predictive accuracy and computational efficiency while using fewer parameters. Claims And Evidence: The claims made in the submission are largely supported through experiments on multiple datasets. The results consistently demonstrate that TimeStacker outperforms state-of-the-art models in predictive accuracy while maintaining computational efficiency. The inclusion of ablation studies further strengthens the validity of the proposed frequency-based self-attention mechanism. However, one potential limitation is the claim that TimeStacker effectively handles multivariate time series. While the model performs well on most datasets, the authors acknowledge a decline in performance as the number of variables increases, suggesting a possible bottleneck. Additionally, while the theoretical justification of the stacking mechanism is well-founded, more in-depth comparisons with alternative approaches for handling nonstationary signals could strengthen the argument. Overall, the empirical results are compelling, but further validation on larger and more complex datasets would reinforce the generalizability of the proposed method. Methods And Evaluation Criteria: The methods and evaluation criteria align well with the problem of time series forecasting. 
TimeStacker introduces a novel stacking mechanism and frequency-based self-attention, both of which are well-motivated by the challenges of nonstationary signals. The choice of benchmark datasets ensures a comprehensive evaluation across diverse real-world applications. The use of Mean Squared Error (MSE) and Mean Absolute Error (MAE) as performance metrics is standard in time series forecasting and appropriately assesses both the accuracy and robustness of predictions. Ablation studies and comparisons with recent state-of-the-art models provide further validation. While the evaluation framework is well-structured, the model's performance on highly multivariate datasets could have been explored further to assess scalability. Theoretical Claims: The authors reference the time-frequency uncertainty principle to justify their multi-scale stacking approach, which is a well-established concept in signal processing. The mathematical formulation of the FreqAttention module, including the use of Fourier transforms and Hadamard products for computing similarity, appears logically sound and aligns with existing principles in time-series analysis. Experimental Designs Or Analyses: The evaluation framework is well-structured, with comparisons against multiple state-of-the-art models across diverse benchmark datasets, ensuring a thorough assessment of TimeStacker’s performance. The use of the standard forecasting metrics MSE and MAE supports the reliability of the results. Additionally, the ablation studies provide insights into the contribution of the frequency-based self-attention module. However, a deeper analysis of computational complexity trade-offs compared to baseline models would strengthen the argument. Overall, the experimental design is good, but more investigation into scalability and complexity would enhance the paper. 
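As an aside, the MSE and MAE metrics discussed throughout these reviews are simple to state precisely. A generic sketch (not code from the paper; function names are ours):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error: average squared deviation of forecasts."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean((y_true - y_pred) ** 2)

def mae(y_true, y_pred):
    """Mean Absolute Error: average absolute deviation of forecasts."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(np.abs(y_true - y_pred))

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 2.0, 2.0])
print(mae(y_true, y_pred))  # 0.5
print(mse(y_true, y_pred))  # ~0.4167
```

MSE penalizes large errors quadratically while MAE is more robust to outliers, which is why papers in this area customarily report both.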
Supplementary Material: No supplementary material provided. Relation To Broader Scientific Literature: The proposed TimeStacker framework is related to prior work in deep learning-based forecasting models, such as MLP-based approaches (DLinear, TimeMixer) and Transformer-based models (PatchTST, Crossformer). The contribution connects with broader trends in time-frequency analysis and multi-scale modeling, which have been explored in the statistical and signal processing literature. By integrating these concepts into a computationally efficient deep learning framework, the paper advances the field by offering a scalable and interpretable solution for nonstationary time series forecasting. Essential References Not Discussed: No major reference missing. Other Strengths And Weaknesses: No additional points. Other Comments Or Suggestions: Table 1 could benefit from an average across datasets to show the overall improvement over baselines. Questions For Authors: No additional questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for your appreciation and for the valuable suggestions. **Cross-Dataset Performance:** For the ETT series data, we present the average performance across datasets (the first row indicates MSE and the second row indicates MAE), as shown below: | TimeStacker | SOFTS | SparseTSF | iTransformer | TimeMixer | SAMformer | PatchTST | Crossformer | DLinear | RLinear | | ---------- | ----- | --------- | ------------ | --------- | --------- | -------- | ----------- | ------- | ------- | | 0.364 | 0.376 | 0.396 | 0.383 | 0.367 | 0.382 | 0.381 | 0.685 | 0.442 | 0.380 | | 0.378 | 0.394 | 0.398 | 0.399 | 0.388 | 0.392 | 0.397 | 0.578 | 0.444 | 0.392 | **Complexity Analysis:** We have conducted an in-depth analysis of how increasing the input sequence length affects memory consumption and training time, with the results averaged over three runs. Partial results are shown in the table below (GPU Memory (MB) / Training Time (ms/iter)): | Input Length | TimeStacker | SparseTSF | PatchTST | Crossformer | | ------------ | ---------- | --------- | ---------- | ----------- | | 192 | 28.8/134 | 15.6/108 | 145.8/87.8 | 5214/238 | | 384 | 29.3/133 | 16.1/112 | 334.7/90.1 | 5734/273 | | 768 | 34.3/137 | 20.6/115 | 830.0/93.3 | 6814/342 | | 1536 | 59.2/137 | 28.7/127 | 2403.6/137 | 9016/1007 | | 3072 | 110.7/134 | 44.1/131 | 7832/1315 | 12470/3121 | We will provide clearer visualizations and more comprehensive data in the appendix. **Regarding Multivariate Data:** Our work primarily focuses on time series modeling and does not include specialized designs for multivariate data. Integrating information from multiple variables is a complex task that we plan to address in our future work. Once again, thank you for your valuable feedback. --- Rebuttal Comment 1.1: Comment: Thank you for answering the questions and taking the feedback into consideration. I would like to keep my original score.
Summary: The paper "TimeStacker: A Novel Framework with Multilevel Observation for Capturing Nonstationary Patterns in Time Series Forecasting" introduces TimeStacker, a forecasting framework that addresses the challenges of nonstationary time series by integrating multi-resolution stacking and frequency-based self-attention. By sequentially aggregating patches of varying sizes, TimeStacker captures both global trends and local variations, while its frequency-based attention module enhances feature extraction by computing similarity in the frequency domain. Grounded in time-frequency analysis and the uncertainty principle, the model outperforms state-of-the-art forecasting methods across diverse real-world datasets, achieving superior accuracy with lower computational complexity. ## After rebuttal I will maintain my score Claims And Evidence: The paper asserts the necessity of analyzing time series in the frequency domain and adopting a multi-resolution perspective, claims that are well-supported by **Definition 3.1 and Theorem 3.2**, which theoretically establish the importance of capturing time-frequency variations in nonstationary signals. Furthermore, the experimental results empirically validate these claims by demonstrating TimeStacker’s superior performance across multiple real-world datasets. However, while the proposed approach effectively addresses the stated challenges, it bears resemblance to existing methodologies that employ similar multi-resolution strategies, which somewhat limits its novelty. Methods And Evaluation Criteria: The proposed method adopts a multi-resolution approach to analyzing time series in the frequency domain, which is a well-motivated strategy for handling nonstationary signals. However, to strengthen the contribution, a clearer distinction between TimeStacker and existing models such as N-HiTS and TimeMixer—which also leverage multi-resolution techniques—would be beneficial. 
Regarding the evaluation criteria, the paper employs widely accepted metrics (e.g., MSE, MAE) and benchmark datasets commonly used in time-series forecasting research, ensuring a fair and standardized comparison against existing methods. Theoretical Claims: I have examined **Definition 3.1 and Theorem 3.2**, both of which serve as the theoretical foundation for TimeStacker’s motivation. **Definition 3.1** effectively formulates the need for frequency-domain analysis by representing nonstationary time series as time-varying Fourier components, while **Theorem 3.2**, derived from the time-frequency uncertainty principle, justifies the necessity of a multi-resolution approach. These theoretical claims are mathematically sound and align with well-established principles in signal processing. Additionally, they provide a clear rationale for the model’s design choices. I did not identify any fundamental issues with these proofs, but a deeper comparison with alternative formulations could further reinforce their validity. Experimental Designs Or Analyses: The experimental design presented in the paper is generally well-structured and valid, employing widely recognized benchmarks and evaluation metrics commonly used in time-series forecasting. The results effectively demonstrate the advantages of the proposed method. However, to further strengthen the empirical analysis, it would be beneficial to include an ablation study on the multi-resolution stacking steps, providing insights into how each resolution level contributes to the final performance. Additionally, a direct performance comparison with N-HiTS, which also utilizes a hierarchical decomposition strategy, would help clarify TimeStacker’s relative advantages and better position it within the landscape of multi-resolution forecasting models. Supplementary Material: The paper does not provide code or implementation details, which limits the ability to fully verify the reproducibility of the results. 
However, I have reviewed the supplementary material in the appendix, including additional experimental results and visualizations, which further support the claims made in the main paper. These supplementary analyses help illustrate the effectiveness of TimeStacker but could be further strengthened by providing more detailed breakdowns of the multi-resolution stacking process and additional comparisons with relevant baselines, such as N-HiTS. Relation To Broader Scientific Literature: The paper introduces a novel perspective on time-series forecasting by emphasizing the importance of frequency-domain analysis and multi-resolution decomposition, challenging the traditional focus on purely temporal correlations. This approach aligns with prior research on multi-scale modeling (e.g., N-HiTS, TimeMixer) but distinguishes itself by explicitly leveraging frequency-based self-attention and progressive stacking to capture both global trends and local variations. By grounding its methodology in time-frequency analysis and the uncertainty principle, the paper contributes to a broader shift in the field, encouraging researchers to rethink conventional time-series modeling paradigms. While the proposed framework builds on existing ideas, it provides a cohesive and theoretically justified approach, which could inspire further advancements in handling nonstationary time-series data. Essential References Not Discussed: Challu, Cristian et al. “N-HiTS: Neural Hierarchical Interpolation for Time Series Forecasting.” AAAI 2023 Other Strengths And Weaknesses: **Strengths:** 1. The paper is well-structured and intuitive, making it easy to follow. The proposed multi-resolution stacking and frequency-based self-attention mechanisms are clearly explained, allowing readers to grasp the motivation behind TimeStacker without excessive complexity. 2. 
The mathematical foundations provided through Definition 3.1 and Theorem 3.2 effectively support the proposed approach, strengthening its conceptual motivation and distinguishing it from purely empirical contributions. **Weaknesses:** 1. While the paper presents a well-motivated framework, its core ideas (multi-resolution analysis and frequency-domain modeling) are not fundamentally new, as similar approaches have been explored in models like **N-HiTS** and Fourier-based architectures (e.g., FEDformer, FiLM). A clearer articulation of how TimeStacker differs from or improves upon these methods would enhance its originality. 2. The paper does not adequately differentiate itself from prior work that employs hierarchical decomposition and frequency modeling. Explicit comparisons—both in theoretical discussion and experimental evaluation—would strengthen the argument for its contribution. 3. Additional points regarding experimental design, missing citations, and potential improvements have been detailed in responses to individual review questions. Overall, while the paper is well-written and theoretically grounded, addressing its novelty concerns and improving its positioning relative to prior multi-resolution models would make the contribution more compelling. Other Comments Or Suggestions: Some figures appear blurry and lack clarity, making it difficult to interpret fine details. Improving the resolution and contrast of the figures would enhance readability. Questions For Authors: 1. How does TimeStacker differentiate itself from other multi-resolution approaches in time-series forecasting? - Several existing methods, such as N-HiTS and other hierarchical decomposition models, already leverage multi-resolution processing. Could you clearly articulate the key differences between TimeStacker and these models in terms of both methodology and performance? - A more explicit comparison could help clarify the novelty and contribution of the proposed approach. 2. 
In what scenarios does TimeStacker outperform existing frequency-domain-based models? - The paper emphasizes the advantages of analyzing time-series data in the frequency domain, but how does TimeStacker compare against prior frequency-aware models such as FEDformer, FiLM, or other Fourier-based architectures? - Are there specific types of datasets, forecasting horizons, or signal characteristics where TimeStacker’s design proves particularly effective? Including such insights would strengthen the empirical justification for the proposed method. 3. Does applying smoothness in the inter-patch frequency-based attention module conflict with the goal of capturing local features? - The smooth layer is stated to reduce noise within patches, yet patches themselves are meant to capture localized variations in the time series. - Would excessive smoothing risk removing important short-term patterns or distort fine-grained structures? - Could you provide any empirical justification or ablation studies showing how this trade-off impacts performance? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for your recognition of our work and your constructive feedback. **Q1&W1:** Our approach fundamentally differs from multi-resolution and Fourier‑based methods. While the latter emphasize extracting static features from various frequency bands—that is, observing the signal at different granularities—TimeStacker focuses on capturing the dynamic evolution within the input signal to reveal its underlying transformation patterns. For this reason, we refer to our method as **“multi‑level”** rather than **“multi‑resolution”**. In TimeStacker, a sequence $$ X = \{x_1, x_2, x_3, ..., x_t\} $$ is first partitioned into segments of length K (ensuring that t is divisible by K), resulting in a new sequence $$ \hat{X}_K = \{X_1, X_2, ..., X_{t/K}\}, \quad X_i = \{x_{(i-1)K + 1}, x_{(i-1)K + 2}, ..., x_{iK}\}. $$ Next, we compute the similarity between these subsequences in the **frequency domain**—essentially using a window of size K with stride K to observe the internal evolution of the sequence. Based on this similarity, we aggregate the subsequences in the **time domain** to produce an output sequence of length t. By continuously varying the window shape (i.e., reducing the observation window K) and repeating the process across multiple levels, we break the constraints imposed by Theorem 3.2 to capture the variation patterns of the Fourier coefficients *a* and *b* as defined in Definition 3.1. In contrast, multi-resolution methods such as N‑HiTS primarily extract signal features via downsampling. For example, given a sequence $$ X = \{x_1, x_2, x_3, ..., x_t\}, $$ downsampling with a stride of k produces a new sequence $$ \hat{X}_k = \{x_1, x_{1+k}, x_{1+2k}, ..., x_{t-k+1}\}, $$ followed by interpolation for prediction. The process is then repeated with smaller strides to capture information at different granularities along the entire sequence. 
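The partition → frequency-domain similarity → time-domain aggregation step described above can be sketched in a few lines of numpy. This is an illustrative reading only, not the authors' implementation: TimeStacker's actual aggregation weights are learned, whereas here we use a fixed softmax over cosine similarity of Fourier coefficients, and we take K as the window length.

```python
import numpy as np

def freq_similarity_aggregate(x, K):
    """Partition x into windows of size K, compare windows in the
    frequency domain, then aggregate them in the time domain."""
    t = len(x)
    assert t % K == 0, "t must be divisible by K"
    patches = x.reshape(t // K, K)           # (t/K, K) subsequences
    spectra = np.fft.rfft(patches, axis=1)   # frequency-domain view of each patch
    # cosine similarity between Fourier coefficient vectors (real + imag parts)
    feats = np.concatenate([spectra.real, spectra.imag], axis=1)
    unit = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    sim = unit @ unit.T                      # (t/K, t/K) patch similarity
    weights = np.exp(sim)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    # aggregate subsequences in the *time* domain using the weights
    out = weights @ patches                  # (t/K, K)
    return out.reshape(t)                    # back to length t

x = np.sin(np.linspace(0, 8 * np.pi, 64))
y = freq_similarity_aggregate(x, K=8)
print(y.shape)  # (64,)
```

Note how the similarity is computed on `rfft` spectra but the weighted sum acts on the raw time-domain patches, mirroring the "compute in frequency, aggregate in time" pipeline and avoiding an inverse transform.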
**Q2&W2&W3:** At a detailed level, TimeStacker leverages frequency‑domain information to observe the dynamic evolution of a sequence. Its process can be summarized as follows: > Sequence → Transformation (Frequency Domain) → Compute variation patterns (Fourier coefficients *a* and *b*) among subsequences → Aggregate in the Time Domain based on these patterns → Sequence → Prediction This approach also helps mitigate errors that can arise from the inverse transformation using discrete orthogonal bases. In contrast, other Fourier‑based architectures (e.g., FEDformer, FiLM) project the time‑domain sequence onto a Fourier (or other orthogonal) basis, enhance the features in the frequency domain, and then apply an inverse transformation to return to the time domain. Their process can be abstractly described as: > Sequence → Transformation (Frequency Domain) → Feature Enhancement → Inverse Transformation (Time Domain) → Sequence → Prediction Below is a preliminary comparison between TimeStacker and baseline models (N‑HiTS, FEDformer, and FiLM). The first value in each cell represents MSE and the second represents MAE. A more comprehensive comparison will be added to the main text and appendix. | Dataset | TimeStacker | N‑HiTS | FEDformer | FiLM | | ----------- | ------------- | ------------- | ------------- | ------------- | | ETTm2 | 0.274 / 0.316 | 0.279 / 0.330 | 0.305 / 0.349 | 0.287 / 0.329 | | Electricity | 0.194 / 0.275 | 0.186 / 0.287 | 0.214 / 0.327 | 0.223 / 0.302 | | Traffic | 0.508 / 0.335 | 0.452 / 0.311 | 0.610 / 0.376 | 0.637 / 0.384 | | Weather | 0.243 / 0.264 | 0.249 / 0.274 | 0.309 / 0.360 | 0.271 / 0.291 | **Q3:** We define the output of our smoothing layer as ***SmoothLayer(x) + x***, as presented in Equation (13) of our paper, incorporating a residual connection. 
This mechanism ensures that even if *SmoothLayer(·)* excessively smooths the signal—thereby potentially losing local features—the residual branch can effectively compensate by reintroducing these features. Consequently, the approach minimizes noise interference while preserving key information, thus enhancing the model’s expressive power. **W3:** We appreciate your suggestions. We will add the relevant essential references to the main text and plan to release the code publicly in the near future. Additionally, we will include a complete comparison with N‑HiTS, FEDformer, and FiLM, and, based on the feedback from all reviewers, we will augment our experimental data to more comprehensively demonstrate the advantages of our approach and further enrich the paper. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' explanation and additional results. I believe that the additional empirical results provided in response to my questions and those of other reviewers should be included in the revised paper and would significantly strengthen it. I respect the other reviewers' comments and I will maintain my score.
Summary: This paper is another incremental work in developing Transformer-based time-series architectures and follows some widely used yet problematic benchmarks, such as ETT, Exchange, Weather, etc. Claims And Evidence: This paper claims its proposed approach may better tackle non-stationary signals in time-series forecasting. Methods And Evaluation Criteria: The benchmark datasets and the whole research stream of many compared baselines have some problems. I suggest the authors watch the talk from the NeurIPS 2024 Time Series Workshop, https://cbergmeir.com/talks/neurips2024/, and adjust the evaluation benchmarks. - Actually, the Exchange dataset is not a proper testbed to compare deep forecasting models. A naive baseline will excel, a lot. Why can your deep learning model win in this case but fall short on Traffic and Electricity, where more predictable patterns exist? - Your model also excels on the Weather dataset; however, every meteorologist will tell you that everything further than 2 weeks into the future is essentially rolling a dice. Forecasting 720 points means 720 / 24 = 30 days out. - The input window length is a hyperparameter. Restricting to small input lengths (such as 64) favours more complex models over simpler ones. Theoretical Claims: N/A Experimental Designs Or Analyses: My major concerns are about the invalidity of evaluation benchmarks and compared baselines. Supplementary Material: Yes. Relation To Broader Scientific Literature: Limited relation. Essential References Not Discussed: If this paper cares about non-stationarity, maybe some existing normalization methods should be mentioned, discussed, and compared if possible. For example, can your proposed architectures handle non-stationarity without using RevIN [1]? If not, why do the proposed modules tackle the non-stationarity challenge, and how can that be demonstrated? [1] Kim, T., Kim, J., Tae, Y., Park, C., Choi, J.-H., and Choo, J. 
Reversible instance normalization for accurate time-series forecasting against distribution shift. In International Conference on Learning Representations, 2021. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: Please share your thoughts or comments after watching the talk or slides in https://cbergmeir.com/talks/neurips2024/. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to comment on our work. We would like to clarify several points and explain the motivations behind methodology and evaluation protocol. **Q1:** *“incremental” Transformer-based time-series model.* **R1:** We acknowledge that Transformer-based approaches have become pervasive in time-series research. However, the TimeStacker model goes beyond a straightforward incremental tweak by **introducing a stacking mechanism and a frequency-based self-attention module**. These contributions distinguish TimeStacker from earlier Transformer-based architectures, as they specifically address the challenges of non-stationarity modeling—issues that are well documented in both academic and applied settings. **Q2:** *Every meteorologist ... dice.* **R2:** Why can meteorologists infer that global warming is accelerating[1] based on historical weather data? The difficulty in precisely characterizing a phenomenon does not imply that it is uncharacterizable. The goal of machine learning is to learn how to make reasonable inferences from historical data, much like human experts, thereby alleviating tedious tasks (learning from human beings and learning for human beings). Consequently, our aim is not to predict the weather exactly two weeks in advance, but rather to enable the model to make sound inferences from historical data—assisting laypersons in data analysis and decision making while allowing experts to focus on deeper theoretical research. **Q3:** *“widely used yet problematic” datasets (ETT, Exchange, Weather, etc.) ... benchmarks.* **R3:** We respectfully disagree with the notion that employing these long-standing community benchmarks—which have been cited and scrutinized by thousands of published works—invalidates our research or the broader domain of Transformer-based time-series forecasting. 
Although no benchmark is perfect, these datasets encompass varied characteristics and provide common reference points that promote cumulative progress in the field. Reproducible research benefits from established baselines, and abandoning them would break continuity with a large body of existing work. We acknowledge the insights presented in the “NeurIPS 2024 Time Series Workshop” talk and value any new perspectives it may offer. However, a single presentation—especially one from a specialized forum—cannot unilaterally dismiss the robust, peer-reviewed benchmarks that have been used by the global research community for many years. While we recognize that these datasets have limitations (as do all benchmarks), they continue to offer a practical and widely accepted foundation for performance comparison. Given that many top-tier conference papers utilize these benchmarks, it is essential for any new approach, including ours, to demonstrate its merits on them. Our objective is to **explore the historical evolution of sequences in order to capture their underlying dynamic patterns**, rather than merely performing forecasting. **Q4:** *performs well on Exchange and Weather but “falls short” on Traffic and Electricity ... predictable.* **R4:** We respectfully note that our experiments do not indicate that TimeStacker “falls short” on these datasets; our reported results are generally comparable to or better than many baselines. Moreover, TimeStacker is specifically designed for modeling non-stationary signals rather than for multivariate data. As demonstrated in our experiments in **Appendix D.3**, when we reduced the number of variables in the Traffic and Electricity datasets, our approach achieved superior performance under the same conditions. 
Our goal in including Traffic and Electricity is to offer comprehensive comparisons on standard community benchmarks, thereby ensuring that readers gain a complete understanding of TimeStacker’s performance across different data regimes. TimeStacker has shown its strengths in more volatile domains where underlying patterns are less predictable. This aligns with our primary focus: robustly handling non-stationarity rather than excelling on any single type of time-series problem. **Q5:** *Restricting input length ... ones.* **R5:** The choice of input window length generally follows standard practices in the literature. Moreover, the window length should be viewed as a reflection of the model’s ability rather than a mere hyperparameter. In our experiments, we have evaluated multiple window lengths (in **Section 4.3, Model Analysis**) and report results using window sizes commonly adopted in prior works to ensure comparability. **Q6:** *Regarding the use of Revin for regularization.* **R6:** As indicated by Equations (1) and (2), this operation simply standardizes the model input to a common scale without altering the stationarity of the signal. In non-stationary signal research, this is considered fundamental. [1] Xu, Yangyang, et al. “Global Warming Will Happen Faster than We Think.” *Nature*, Dec. 2018.
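The input standardization referred to in R6—in the spirit of the RevIN method the reviewer cites (Kim et al., 2021)—amounts to a per-instance normalize/denormalize pair. A generic sketch under that assumption (names are ours, not the authors' Equations (1)–(2) verbatim):

```python
import numpy as np

def instance_normalize(x, eps=1e-5):
    """Standardize one input window to zero mean / unit variance,
    returning the statistics needed to undo the transform later."""
    mu, sigma = x.mean(), x.std() + eps
    return (x - mu) / sigma, (mu, sigma)

def instance_denormalize(y, stats):
    """Map a (forecast) sequence back to the original scale."""
    mu, sigma = stats
    return y * sigma + mu

x = np.array([10.0, 12.0, 11.0, 13.0])   # one input window
z, stats = instance_normalize(x)          # fed to the model
x_rec = instance_denormalize(z, stats)    # applied to model output
print(np.allclose(x, x_rec))  # True
```

As the rebuttal notes, this rescales each window to a common scale but does not change whether the signal's frequency content varies over time, so it is complementary to, rather than a substitute for, mechanisms that model non-stationarity.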
Summary: The paper introduces TimeStacker, a novel time series forecasting framework designed to handle nonstationary signals effectively. The proposed approach utilizes a multi-level stacking mechanism, aggregating patches of varying sizes to capture both local and global frequency-domain features. Additionally, a frequency-based self-attention module (FreqAttention) is introduced, which computes similarity in the frequency domain while aggregating in the time domain. The authors claim that TimeStacker achieves state-of-the-art performance across several real-world datasets, outperforming recent Transformer- and MLP-based forecasting models with fewer parameters and better computational efficiency. Claims And Evidence: The authors claim that their method achieves state-of-the-art performance across multiple real-world datasets. This is mostly validated by the empirical experiments on 8 datasets, though the proposed method does not achieve the best results on the Traffic and Electricity datasets. The authors also claim that the proposed method has fewer parameters and higher computational efficiency; this is supported by comparisons of training time and memory footprint. The comparison shows that the proposed method is somewhat efficient but is not the most efficient one. Methods And Evaluation Criteria: The TimeStacker framework is designed to handle nonstationary patterns in time series data. The core components align well with this goal: - Multi-level patch stacking captures both local and global features, addressing the issue of fluctuating frequency characteristics in nonstationary time series. - FreqAttention (frequency-based self-attention) leverages frequency-domain similarity rather than time-domain patterns, which is a reasonable approach for handling signals with evolving spectral properties. The evaluation covers 8 datasets and uses MSE and MAE as metrics, which is common in time series analysis. However, the test data is not specifically non-stationary. 
Some tests with synthetic data are encouraged. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design and analysis are solid. The proposed method is compared with 9 state-of-the-art methods on 8 different datasets. The authors have conducted extensive ablation studies. Experiments on efficiency and look-back lengths help readers better understand the performance of TimeStacker. Supplementary Material: Appendix A (dataset description) has been reviewed. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: In line 165, "theSmooth Layer" is missing a space. Questions For Authors: 1. Can you add synthetic experiments on non-stationary data? 2. Can you provide an ablation study showing how TimeStacker adapts to different types of nonstationary signals compared to other models? Can you provide the mean and standard deviation for MSE and MAE across multiple runs? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for your appreciation and constructive comments. **Q1:** The synthetic data is constructed by randomly selecting 30 frequencies, with the amplitude corresponding to each frequency varying over time. The experimental results are visualized in this anonymous URL (https://i.postimg.cc/pT66jQHH/synthetic-exp.png). More detailed information will be provided in the appendix. **Q2:** To demonstrate how TimeStacker adapts to various non-stationary signals, we configured the parameter *Patch Size List* and conducted experiments on the ETTm1 dataset. The results are shown below:

| Patch Size List | [16,16,16,16] | [16,16,16,24] | [16,16,16,32] | [16,16,16,48] | [16,16,24,32] | [16,16,24,48] | [16,24,32,48] |
| --------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
| MSE | 0.465 | 0.468 | 0.468 | 0.465 | 0.463 | 0.463 | 0.460 |
| MAE | 0.433 | 0.439 | 0.436 | 0.431 | 0.431 | 0.430 | 0.428 |

These results indicate that employing various window combinations can more effectively capture the underlying dynamic patterns of the sequence, thereby improving prediction performance. The mean and standard deviation of the results from multiple runs (5 different seeds) are provided as follows (mean/std):

| Dataset | ETTh1 | ETTh2 | ETTm1 | ETTm2 | Traffic | Electricity | Weather | Exchange |
| ------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
| MSE | 0.433/0.00145 | 0.368/0.00091 | 0.381/0.00119 | 0.274/0.00061 | 0.508/0.00052 | 0.194/0.00056 | 0.243/0.00092 | 0.336/0.00101 |
| MAE | 0.423/0.00167 | 0.390/0.00057 | 0.381/0.00052 | 0.316/0.00042 | 0.335/0.00087 | 0.275/0.00077 | 0.264/0.00042 | 0.389/0.00137 |

Once again, we appreciate your thorough review of our manuscript. The typo in “theSmooth Layer” has been corrected.
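The synthetic construction sketched in Q1 (30 random frequencies whose amplitudes vary over time) can be illustrated as follows; the random-walk amplitude modulation and the parameter ranges are assumptions for illustration, not the authors' exact recipe:

```python
import numpy as np

def make_nonstationary_series(length=1024, n_freqs=30, seed=0):
    """Sum of sinusoids whose per-frequency amplitudes drift over time,
    yielding a non-stationary signal with an evolving spectrum."""
    rng = np.random.default_rng(seed)
    t = np.arange(length)
    freqs = rng.uniform(0.001, 0.2, size=n_freqs)      # cycles per step
    phases = rng.uniform(0, 2 * np.pi, size=n_freqs)
    # Slow random walk per frequency: the time-varying amplitude part.
    amps = np.cumsum(rng.normal(0, 0.05, size=(n_freqs, length)), axis=1) + 1.0
    signal = sum(
        amps[k] * np.sin(2 * np.pi * freqs[k] * t + phases[k])
        for k in range(n_freqs)
    )
    return signal

x = make_nonstationary_series()
```

Because the amplitude of each frequency component drifts, the spectrum of an early window differs from that of a late one, which is exactly the regime the rebuttal targets.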
Prior Knowledge Guided Neural Architecture Generation
Accept (poster)
Summary: The authors propose a Neural Architecture Generation (NAG) method based on Prior knowledGe (PG, for PG-NAG). NAG techniques extend Neural Architecture Search (NAS) to discover features such as operations and subgraphs that contribute to high performance. This is achieved using Shapley values. Specifically, importances are initialized as the prior knowledge of the diffusion model, and the Shapley values from the top-20 architectures are used to initialize this prior knowledge. PG-NAG is evaluated on several benchmarks such as NAS-Bench-101, NATS-Bench, TransNAS-Bench, etc. **Post-Rebuttal Response** I thank the authors for their detailed response. While my score will remain as weak reject, I quote and emphasize the direct language of how ICML perceives such a score: "Weak reject (i.e., leaning towards reject, but could also be accepted)" *Reasons to raise score* - The rebuttal is pretty well-detailed and provides good factual clarification on the workings of the paper. - The justification for the performance drop of PG-NAG compared to other techniques is mostly that PG-NAG is faster/more efficient, and the authors emphasize this. They are encouraged to place further emphasis on this advantage in the paper, e.g., BOHB taking 12k seconds per Table 2. *Reasons to lower score* - The rebuttal's key weakness is the response regarding Table 1. To quote the rebuttal: "We apologize for the confusion. We report the best performance, selecting the top architecture from five runs of PG-NAG." That the authors are showing the best result for **their** method yet the average result for **other** methods is bizarre and conspicuous. As the main result for the paper, it is literally an apples-to-oranges comparison that cannot be overlooked. Claims And Evidence: A claim of this paper is that search spaces are large, necessitating smarter techniques for performance evaluation. 
While it is indeed true that large search spaces prohibit exhaustive evaluation, there are existing techniques that overcome this burden. Methods And Evaluation Criteria: The method is trained and evaluated on several standard benchmarks like DARTS, etc. The evaluation experimental setup raises no alarms. Theoretical Claims: This is difficult to evaluate, as the paper's methodology is not well-written and difficult to understand, even after performing multiple passes over the methodology section. Specifically, better visual examples would help a lot. One idea the reviewer takes issue with is that this method relies on knowing which architectures are good ahead of time, and which are bad. PG-NAG trains using only a small subset of cherry-picked architectures. Experimental Designs Or Analyses: Experimental design is sound but not noteworthy. A primary weakness of this paper is that its results are not even incremental. The method does not provide clear and convincing proof of its efficacy to merit acceptance. Some examples: - Table 1: Lines 260-263 (right column) state "To validate the effectiveness of PG-NAG in five new search spaces, we design five architectures for each search space", yet Table 1 only features one result from PG-NAG for ImageNet or CIFAR-10. Is it a mean value? A max? - Table 2: the method loses on CIFAR-100 and ImageNet16-120. - Table 3: the method loses on every task except Class Object and Room Layout. - Table 5: the method loses compared to Random and DiffusionNAG in terms of max performance. Supplementary Material: I checked the supplementary material and focused on the results. For NAS-Bench-201, the method loses to DiffusionNAG on CIFAR-10, breaks even on CIFAR-100, and loses to L2NAS on ImageNet. For NAS-Bench-101, the method achieves good performance but there is missing related work (discussed later). 
Relation To Broader Scientific Literature: PG-NAG extends the idea of directly generating or building high-performance architectures from a search space as opposed to using a search algorithm. There are several related works in this field, such as DiffusionNAG [1], a primary related work that it cites. There is also AutoBuild [2], which is cited but not compared to (different benchmarks). This paper also lightly deals with the concept of finding good/bad architecture subcomponents, and thus falls under the banner of interpretable NAS, like AutoBuild and others. Essential References Not Discussed: For the NAS-Bench-101 comparison in Table 10, a missing related work would be GA-NAS [3], which also obtains 94.23%. For the idea of finding relevant architecture subcomponents, there is NAS-Bowl [4], which is not cited. Other Strengths And Weaknesses: Presentation quality of this paper is definitively below the bar for an A* conference like ICML. As stated before, the method is hard to understand and grasp in key sections, e.g., computation of Shapley values and `visualizing` how the method actually works. Additionally, float formatting leaves much to be desired. Fig. 1/2 and Tab. 1 in the main manuscript are decently formatted, but the rest are not, e.g., the scalebox or text is too small. Other Comments Or Suggestions: References: [1] "DiffusionNAG: Predictor-guided Neural Architecture Generation with Diffusion Models" - ICLR'24 [2] "Building Optimal Neural Architectures using Interpretable Knowledge" - CVPR'24 [3] "Generative Adversarial Neural Architecture Search" - IJCAI-21 [4] "Interpretable Neural Architecture Search via Bayesian Optimisation with Weisfeiler-Lehman Kernels" - ICLR'21 Questions For Authors: What factors influence architecture selection criteria exactly? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank you for the recognition of our effective evaluation. We are also grateful for the valuable feedback. ## Theoretical Claims >Visual examples We add a flowchart (_https://anonymous.4open.science/r/PGNAG/flowchart.png_). >Concerns about prior knowledge PG-NAG aims to train on high-performance architectures from benchmarks and transfer the knowledge to generate architectures on other datasets. Experiments show that if we select architectures from a narrow performance range, the generated architectures still achieve competitive results (weakness 1 in Reviewer PwDK). This demonstrates that PG-NAG can learn the design principles. ## Experimental Designs >Table 1 We apologize for the confusion. We report the best performance, selecting the top architecture from five runs of PG-NAG. >Table 2 PG-NAG only loses to BOHB on CIFAR-100 and ImageNet16-120. However, it is important to note that PG-NAG runs $2.89$x faster than BOHB, i.e., BOHB requires about $12,000$ seconds yet PG-NAG takes only $4,147$ seconds to generate the high-performance architecture. >Table 3 Though PG-NAG does not outperform other methods on every task, it achieves the highest average rank among all methods. Meanwhile, PG-NAG also shows significant gains on Class Object and Room Layout. The results demonstrate its generalization capability across diverse tasks. >Table 5 Although PG-NAG shows a marginal 0.01% lower max performance compared to Random and DiffusionNAG, it is important to note that DiffusionNAG requires querying an architecture on the target dataset, while PG-NAG does not, thereby enhancing the generalizability of PG-NAG. Compared to Random, PG-NAG has higher minimum and mean values, and can consistently generate high-performance architectures. ## Supplementary Material It is important to note that DiffusionNAG and L2NAS require querying additional architectures on the target dataset, while PG-NAG does not. 
Specifically, DiffusionNAG queries one architecture, and L2NAS queries $1,000$ architectures. The extra querying demands additional computational resources and reduces the generalization capabilities. For a fair comparison, we allow PG-NAG the same number of queried architectures as the two methods. The accuracies are 94.31% for NAS-Bench-101, and 94.37% and 47.31% for CIFAR-10 and ImageNet-16-120 in NAS-Bench-201, which are higher than those of DiffusionNAG and L2NAS. ## Essential References We added the two methods to the new manuscript. GA-NAS has the same accuracy as PG-NAG, while NAS-Bowl underperforms at 94.2% accuracy compared to PG-NAG. However, GA-NAS requires an additional 150 architectures in NAS-Bench-101 compared to PG-NAG. ## Weakness >Presentation quality Thank you, and we will thoroughly polish the manuscript. Subsequently, we provide a detailed explanation of the computation process for Shapley values and how PG-NAG works. * Shapley values: Shapley values are used to quantify the importance of each operation to overall performance. We provide a detailed flowchart of this computation process in Figure (_https://anonymous.4open.science/r/PGNAG/shapley%20value.png_). Specifically, we remove a specific operation and measure the performance difference compared to the original architecture. This performance drop reflects the marginal contribution of the operation; these drops are visualized in Appendix Figure 7. After computing Shapley values, we further visualize the relationships among these operations through heatmaps for NAS-Bench-101 (_https://anonymous.4open.science/r/PGNAG/Shapley%20values/NAS-Bench-101.pdf_), NAS-Bench-201 (_https://anonymous.4open.science/r/PGNAG/Shapley%20values/NAS-Bench-201.pdf_), and NAS-Bench-301 (_https://anonymous.4open.science/r/PGNAG/Shapley%20values/NAS-Bench-301.pdf_). * Visualization of PG-NAG: We add a flowchart (https://anonymous.4open.science/r/PGNAG/flowchart.png). 
It begins with the constraints of the search space. For example, architectures are constrained to have six operations and four nodes in NAS-Bench-NLP. Then a diffusion model is trained for architecture generation. This diffusion model begins with extracting prior knowledge from the three benchmarks using Shapley values, then incorporates this guidance into the model to learn the architecture design principles. >Format of the manuscript We modified all the charts in the manuscript. ## Question: The influencing factors are as follows: * Prior knowledge: The quality of prior knowledge influences the architecture design principles learned by PG-NAG (weakness 1 of Reviewer PwDK). * Noise schedule: Different noise schedules in the diffusion model affect the generation of architectures (weakness 2 of Reviewer PwDK). * Learning methods for operations and connections: We utilize a GCN to learn the features of operations in the diffusion model. Because the features of connections are hard to learn, we learn the features of subgraphs instead. The ablation study is in Table 6 in the main text.
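The marginal-contribution view of Shapley values described in the rebuttal (remove an operation, measure the performance drop) generalizes to the standard permutation-based estimator: an operation's Shapley value is its performance contribution averaged over random orderings of the operation set. A toy sketch with a hypothetical additive performance function, not the paper's implementation:

```python
import random

def shapley_values(ops, perf, n_perms=200, seed=0):
    """Monte-Carlo Shapley estimate: average each operation's marginal
    contribution to `perf` over random orderings of the operation set."""
    rng = random.Random(seed)
    phi = {op: 0.0 for op in ops}
    for _ in range(n_perms):
        order = list(ops)
        rng.shuffle(order)
        coalition = set()
        prev = perf(frozenset(coalition))
        for op in order:
            coalition.add(op)
            cur = perf(frozenset(coalition))
            phi[op] += cur - prev   # marginal contribution of `op`
            prev = cur
    return {op: v / n_perms for op, v in phi.items()}

# Hypothetical stand-in for "accuracy of an architecture containing
# these operations"; the numbers are illustrative, not from the paper.
def toy_perf(ops):
    base = {"conv3x3": 3.0, "skip": 1.0, "pool": 0.5}
    return sum(base[o] for o in ops)

phi = shapley_values(["conv3x3", "skip", "pool"], toy_perf)
```

For an additive performance function like `toy_perf`, the estimate recovers each operation's own contribution exactly; with real architecture accuracies the averaging over orderings is what captures interaction effects between operations.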
Summary: This paper presents a novel method to enhance neural architecture generation using diffusion models. Instead of relying on predictor-based approaches, the authors train a diffusion model on graph representations of high-performing architectures. They further integrate explicit prior meta-knowledge extracted through Shapley value analysis to quantify the contribution of each component. This guidance steers the model to focus exclusively on generating high-quality architectures. Experimental results demonstrate that the method achieves promising performance improvements. Claims And Evidence: Most of the paper’s claims are supported by extensive empirical evidence. However, a couple of points warrant further scrutiny: - As noted in Lines 96-99, the authors train their generative model solely on high-performing architectures. This approach could limit the diversity of the derived prior knowledge. A deeper discussion or additional experiments examining the potential limitations and diversity issues of this focus would strengthen the claim. - Although the authors leverage prior knowledge to guide architecture generation, with experimental results supporting this approach, the evidence (see Table 8 in Section 4.4) suggests that the performance gains may primarily stem from extracting knowledge from high-performing architectures. This raises the question of whether the observed improvements are due to the superior quality of the training data rather than the guidance mechanism itself. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited to address the challenge of efficient neural architecture generation. **Methodology** Unlike existing diffusion-based architecture generation methods that rely on accuracy predictors derived from graph representations, the authors introduce a novel approach that integrates explicit prior knowledge. 
This prior knowledge is extracted from high-performing architectures using Shapley value analysis, which quantifies the contribution of each neural network operation. By combining this information with the graph representation of architectures, the diffusion model is trained to focus exclusively on generating high-quality designs. **Evaluation Criteria** The authors evaluate their method across several well-established NAS benchmarks, including DARTS, NAS-Bench-201, TransNAS-Bench-101, NAS-Bench-ASR, and NAS-Bench-NLP. They use standard metrics such as top-1 accuracy, computational cost (measured in GPU days), parameter counts, and FLOPs. This comprehensive evaluation framework provides a robust comparison against state-of-the-art approaches and demonstrates the efficiency and effectiveness of the proposed method. Theoretical Claims: The proposed method builds upon conventional diffusion models, as seen in previous architecture generation approaches, and its claims are consistent with established principles in the conditional diffusion literature. Experimental Designs Or Analyses: Overall, the experimental framework is robust and well-aligned with established practices in NAS research. The authors evaluate their method on popular NAS datasets and benchmarks, comparing it with several existing approaches. However, one concern is that DiffusionNAG, which also employs a diffusion-based approach but relies on an accuracy predictor, is not included in Table 1 and only appears in later results. Including DiffusionNAG in the initial comparison would provide a clearer and more comprehensive evaluation of the proposed method relative to similar diffusion-based techniques. Supplementary Material: No supplementary results. Relation To Broader Scientific Literature: By integrating prior knowledge into the generation process, the authors extend and enrich the current scientific literature in neural architecture design. 
Specifically, they replace a task-conditioned accuracy predictor with a prior derived from high-performing architectures. However, this approach raises the question of whether training exclusively on high-performing architectures is sufficient to overcome the benefits offered by dataset-conditioned accuracy-predictor-based models. While the contribution may appear marginal at first glance, the underlying idea holds promise and could pave the way for further improvements. Essential References Not Discussed: The authors provide a comprehensive review of the most relevant work in this field, effectively contextualizing their contributions. Other Strengths And Weaknesses: **Strengths:** - **Efficiency in Generation:** By incorporating prior knowledge, the method reduces the need for iterative evaluation or reliance on accuracy predictors during the architecture generation process. - **High-Quality Outputs:** The generated architectures achieve competitive performance, demonstrating that the guidance from high-performing designs is effective. **Weaknesses:** - **Dependency on Prior Knowledge Quality:** The method’s success heavily relies on a highly curated set of high-performing architectures. If the source data is biased or unrepresentative, the performance could be adversely affected. - **Limited Architectural Diversity:** Training solely on high-performing architectures may constrain the model’s ability to generate diverse and innovative designs, potentially limiting the exploration of novel architectures. This lack of diversity is shown in Figure 3 and Table 5, since the method focuses only on high-performing architectures. - **Lack of Dataset Conditioning:** Unlike DiffusionNAG, which uses dataset-conditioned sampling to tailor generation for unseen datasets, this method operates via blind generation, potentially limiting its adaptability to new tasks. 
- **Attribution of Performance Gains:** There is a concern that the observed performance improvements might primarily stem from the high-quality training data rather than the inherent advantages of the proposed method itself. Other Comments Or Suggestions: - Conduct a thorough investigation of the diversity of the generated architectures relative to the pretrained architectures. - Include DiffusionNAG in Table 1 for a direct performance comparison. - Explore the potential benefits of incorporating dataset-specific knowledge into the generation process. Questions For Authors: 1. **Architecture Retrieval:** The sampling process appears to resemble a retrieval of high-performing architectures, since the conditioning is fixed based on a set of already high-performing designs. Could you clarify how this method differs from a simple retrieval mechanism, and what ensures that truly novel architectures are generated? 2. **Variance and Diversity:** How do you explain the observed low variance in the generated architectures? Could this be an indication that the method is overly focused on high-performing designs, thereby potentially limiting diversity and innovation? 3. **Comparison with DiffusionNAG:** DiffusionNAG uses a dataset-conditioned accuracy predictor, while your approach removes this predictor. How does your method ensure that it generates high-performing architectures for unseen datasets? Is it possible that the performance gains are primarily due to the fact that the training architectures perform similarly well on unseen datasets? 4. **Incorporating Dataset Information:** What are the challenges in integrating dataset-specific information into your generation process? It seems that blindly generating architectures may be effective only if the pretrained set is highly representative. Could you discuss the potential difficulties or limitations in incorporating dataset conditioning into your approach? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for recognizing our efficiency, high-quality outputs, and robust experiments. We are grateful for the constructive feedback. ## Claims and Evidence >Potential limitations and diverse generation We discuss potential limitations regarding the quality of prior knowledge (weakness 1 in Reviewer PwDk). Then we show that PG-NAG can generate diverse architectures (weakness 3 in Reviewer PwDk). >Effectiveness of guidance mechanism We conduct experiments that indicate the guidance mechanism indeed plays a critical role. Specifically, we replace prior knowledge with architectures of 40%-50% and 80%-90% accuracy. PG-NAG generates architectures whose performance matches the provided architectures, demonstrating its effectiveness.

|Architectures|CIFAR-10|CIFAR-100|ImageNet-16-120|
|:-:|:-:|:-:|:-:|
|40%-50%|75.81|47.13|15.66|
|80%-90%|88.37|60.22|35.30|
|top-20|94.36|73.51|46.34|

## Weakness >Dependency on prior knowledge quality The dependency is discussed in weakness 1 in Reviewer PwDk. >Limited architectural diversity We discuss the diversity in claims and evidence 1. >Lack of Dataset Conditioning Indeed, PG-NAG does perform dataset conditioning. Specifically, we incorporate constraints to guide the process for different datasets to achieve a similar effect to dataset conditioning. Unlike DiffusionNAG, PG-NAG is successfully applied to a wider range of tasks, such as speech recognition in Table 4 of the main text. >Attribution of performance gains The performance gains of PG-NAG are mainly attributed to the guidance mechanism instead of the high-quality training data. The effectiveness of the guidance mechanism is demonstrated in claims and evidence 2. ## Other Comments >Diversity of generated architectures We investigate the diversity regarding the parameters and distribution of generated architectures. 
A visualization of the accuracy and params of 20 generated architectures is in Figure (_https://anonymous.4open.science/r/PGNAG/generated%20architectures%20in%20NAS-Bench-201.pdf_); it demonstrates that PG-NAG can generate diverse architectures. > Include DiffusionNAG We added the mean accuracy of DiffusionNAG on CIFAR-10 (97.39%$\pm$0.01), which is lower than that of PG-NAG (97.48%$\pm$0.08). >Potential benefits of incorporating dataset-specific knowledge Incorporating dataset-specific knowledge could enhance accuracy. PG-NAG achieves top performance on NAS-Bench-101 in Appendix Table 1. For comparison, we remove architectures from NAS-Bench-101 in the prior knowledge, and the results can be seen below. This shows that dataset-specific knowledge could make PG-NAG more precise.

|Prior knowledge|Ranking|Acc(%)|
|:-:|:-:|:-:|
|no NAS-Bench-101|0.004|94.08|
|PG-NAG|0.001|94.23|

## Questions >Architecture retrieval * Differences: A retrieval method needs to learn the target dataset to obtain the conditions. In contrast, PG-NAG learns from existing benchmarks and then transfers the learned design principles across different tasks, without a learning process on the target dataset. * Ensuring novelty: PG-NAG introduces noise to explore variations beyond the benchmarks. Figure (_https://anonymous.4open.science/r/PGNAG/differences%20in%20generated%20architecture%20in%20TransNAS-bench-101%20and%20prior%20knowledge.png_) illustrates that the generated architectures for TransNAS-Bench-101 are different from the learned architectures in NAS-Bench-201. >Variance and diversity Please see weakness 2. >Comparison with DiffusionNAG * Ensuring the generation of high-performance architectures: We replace the predictor with a guidance mechanism, whose effectiveness has been discussed in claims and evidence 2. 
* Reason for performance gains: The performance gains cannot be attributed to the decent performance of the training architectures on unseen datasets; they should be attributed to the effectiveness of the guidance mechanism based on prior knowledge. For example, NAS-Bench-NLP includes linear operations absent in the prior knowledge, showing that generated architectures adapt to new tasks. Additionally, the prior knowledge is derived from image classification, while our tasks include speech recognition and natural language processing. >Incorporating data information * Challenges of incorporating dataset-specific information include reduced generalization and the need for encoding. PG-NAG aims to generalize across datasets rather than overfitting to a specific one. While dataset-specific information can improve accuracy, it may reduce generalizability. Additionally, PG-NAG learns architecture design principles from different datasets, so these architectures need to be represented in a unified format. * The difficulties primarily come from the need to choose appropriate datasets and ensure effective learning across them. First, we need to select which datasets to incorporate. To enhance generalization, we choose three widely used NAS benchmarks. Second, to ensure effective learning across datasets, we encode architectures using a unified DGL graph representation.
Summary: This paper proposes a method, Prior Knowledge Guided Neural Architecture Generation, to efficiently generate high-performance neural architectures without the need for an exhaustive search and evaluation process. The key idea is to leverage prior knowledge extracted from high-performance architectures to guide a diffusion model for architecture generation. The method is validated on several search spaces, including DARTS, NATS-Bench, TransNAS-Bench-101, NAS-Bench-ASR, and NAS-Bench-NLP, achieving state-of-the-art performance with significantly reduced computational cost (0.004 GPU days for ImageNet). ## update after rebuttal After the rebuttal, I feel my concerns have been well discussed by the authors, and some of the weaknesses have been fixed. Therefore, I choose to raise my positive score. As we see, the majority of reviewers lean towards accepting this submission; perhaps we should ask Reviewer xYiC, who holds the only negative score, to further discuss whether his/her concerns have been addressed. Claims And Evidence: Some claims (e.g., efficiency and accuracy) are well-supported by experimental results. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria generally make sense for the problem of neural architecture generation. Theoretical Claims: Yes, I examined the theoretical claims related to the use of Shapley values (Equations 3 and 4) and the diffusion model (Equations 5, 6, and 7). Experimental Designs Or Analyses: Yes, I reviewed the soundness and validity of the experimental design and analysis in the paper, focusing on the benchmark selection, comparison methods, performance metrics, and ablation studies. Supplementary Material: Yes, I reviewed the supplementary material. Relation To Broader Scientific Literature: The submission provides a discussion of the relevant literature. 
Prior work required iteratively evaluating a large number of architectures, and the manuscript provides a more efficient way to automatically generate architectures. Essential References Not Discussed: This paper already has a comprehensive literature review. Other Strengths And Weaknesses: Other Strengths: **Efficiency.** PG-NAG eliminates the need for costly architecture evaluations, requiring only 0.004 GPU days to generate architectures with competitive performance. **The use of Shapley values.** Using Shapley values to quantify the contribution of each operation and connection is new and grounded in cooperative game theory. **Generality.** PG-NAG shows strong performance across diverse search spaces and tasks, including vision, speech, and language. Other Weaknesses: **Limited Discussion on Prior Knowledge.** The manuscript does not provide enough detail on the potential limitations or biases introduced by relying on prior knowledge from existing benchmarks. **Fixed Noise Schedule.** The noise schedule in the diffusion model is fixed (sigmoid). However, different search spaces and architecture complexities might require different noise schedules. **Size and Performance Trade-off.** The paper focuses heavily on accuracy but does not explore the trade-off between model size and accuracy. **Improve Interpretability.** The manuscript should include analysis or visualization of why the generated architectures perform well. Other Comments Or Suggestions: Refer to the Weaknesses section Questions For Authors: Refer to the Weaknesses section Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for recognizing our efficiency, novel use of Shapley values, and generality. We are also grateful for the valuable feedback. ## Weakness > Limited discussion on prior knowledge A potential limitation is that the quality of prior knowledge affects the accuracy of generated architectures. Specifically, we compare different strategies for obtaining prior knowledge. In our main text, we select the top-20 high-performance architectures from each of the three benchmarks. For comparison, we use the following sampling strategies: architectures uniformly sampled across a performance range of 0%–100%, and architectures uniformly sampled from two narrower ranges (i.e., 90%–100% and 80%–90%). The architecture accuracy we used is the validation accuracy on CIFAR-10. To see the distribution of architectures more intuitively, we take the benchmark NAS-Bench-101 as an example and visualize the performance and parameter distribution of architectures under different strategies in the Figure (_https://anonymous.4open.science/r/PGNAG/prior.png_). As shown in the table below, architectures randomly sampled in a narrow high-performance range can also generate high-performance architectures. This demonstrates that the quality of prior knowledge is crucial for the effectiveness of PG-NAG.

|Method|CIFAR-10|CIFAR-100|ImageNet-16-120|
|:-:|:-:|:-:|:-:|
|sample in 0%-100%|87.11|56.95|30.57|
|sample in 80%-90%|88.37|60.22|35.30|
|sample in 90%-100%|93.23|70.06|43.02|
|top-20|94.36|73.51|46.34|

> Fixed noise schedule The sigmoid noise schedule is fixed because it is effective across different search spaces and architecture complexities. Specifically, we conduct experiments in the NAS-Bench-201 and DARTS search spaces, comparing the performance of sigmoid, linear, and cosine noise schedules. We evaluate PG-NAG on CIFAR-10, CIFAR-100, and ImageNet-16-120 in NAS-Bench-201, and evaluate PG-NAG on CIFAR-10 in DARTS. 
The table below shows that the sigmoid schedule achieves the best performance across different search spaces and architecture complexities.

| Noise Schedule | CIFAR-10 | CIFAR-100 | ImageNet-16-120 | CIFAR-10 in DARTS |
|:-:|:-:|:-:|:-:|:-:|
|linear|93.42|70.90|45.33|97.40|
|cosine|93.50|70.67|44.53|97.54|
|sigmoid|94.36|73.51|46.34|97.56|

> Size and performance trade-off

We would like to clarify that PG-NAG can handle the trade-off between model size and accuracy. This is because the prior knowledge already covers architectures with both high performance and diverse parameter counts, as visualized in the figure (_https://anonymous.4open.science/r/PGNAG/generated%20architectures%20in%20NAS-Bench-201.pdf_). We control the model size of the generated architectures when needed. In the table below, the first three rows control the model size, and the last row reports the results from the main text. From the results, we find that when the model size becomes smaller, the performance of PG-NAG does not drop significantly, demonstrating its ability to balance model size and performance effectively.

| FLOP (M) | Params (MB) | Latency (ms) | CIFAR-10 | CIFAR-100 | ImageNet-16-120 |
|:-:|:-:|:-:|:-:|:-:|:-:|
|121.82|0.858|21.41|93.50|70.67|44.53|
|149.34|1.045|19.97|94.02|72.99|45.44|
|153.27|1.073|20.22|94.37|73.22|46.71|
|184.73|1.289|20.59|94.36|73.51|46.34|

> Improve interpretability

To interpret why the generated architectures perform well, we add two visualizations showing the impact of operations on architecture performance and the correlation between operations, respectively. Furthermore, we analyze how prior knowledge effectively ensures the generation of high-performance architectures.

* Operation-wise Influence on Performance: In Appendix Figure 7, we visualize the impact of different operations on architecture performance across various search spaces.
This helps illustrate the contribution of each operation to overall accuracy.

* Relevance of Operations in Benchmarks: We provide visualizations of operation relevance in NAS-Bench-101 (_https://anonymous.4open.science/r/PGNAG/Shapley%20values/NAS-Bench-101.pdf_), NAS-Bench-201 (_https://anonymous.4open.science/r/PGNAG/Shapley%20values/NAS-Bench-201.pdf_), and NAS-Bench-301 (_https://anonymous.4open.science/r/PGNAG/Shapley%20values/NAS-Bench-301.pdf_), which are used to obtain prior knowledge. Shapley values quantify the marginal contribution of each operation, while heatmaps highlight the correlations between different operations for better interpretability.
* Analysis of the effectiveness of PG-NAG: To further explain why our method performs well, we provide a detailed discussion in response to Reviewer WwRR's Weakness 1, analyzing the interactions between architecture components and their impact on performance.
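As a side note on the noise-schedule comparison in this rebuttal, the linear, cosine, and sigmoid schedules can be sketched as follows; the endpoints and the sigmoid temperature `tau` are illustrative assumptions rather than the paper's exact parameterization.

```python
import math

def linear_schedule(t):
    """Normalized timestep t in [0, 1] -> noise level; straight-line ramp."""
    return t

def cosine_schedule(t):
    """Cosine schedule: slow at both ends, faster in the middle."""
    return 1.0 - math.cos(0.5 * math.pi * t) ** 2  # equals sin^2(pi * t / 2)

def sigmoid_schedule(t, tau=3.0):
    """Sigmoid schedule, rescaled so it runs exactly from 0 to 1."""
    s = lambda x: 1.0 / (1.0 + math.exp(-x))
    lo, hi = s(-tau), s(tau)
    return (s(tau * (2.0 * t - 1.0)) - lo) / (hi - lo)
```

All three map a normalized timestep monotonically to a noise level in [0, 1]; the sigmoid variant concentrates its change in the middle of the trajectory, which is one common rationale for a smoother, more controlled denoising process.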
Summary: This paper proposes a neural architecture generation method called Prior Knowledge Guided Neural Architecture Generation (PG-NAG), which aims to generate high-performance neural architectures without the need for search and evaluation processes. By quantifying the contribution of each component within an architecture to its overall performance, the method identifies valuable prior knowledge and uses it to guide a diffusion model to generate architectures for various tasks. Extensive experiments demonstrate that PG-NAG achieves superior accuracy with minimal computational resources (e.g., generating architectures with 76.1% top-1 accuracy on ImageNet and 97.56% on CIFAR-10 in just 0.004 GPU days). The method also shows strong generalization across unseen search spaces like TransNAS-Bench-101 and NATSBench. ## Update after rebuttal The authors' rebuttal looks great to me. I am finally happy to raise the recommendation to clear accept. Claims And Evidence: The claims made in the submission are supported. Methods And Evaluation Criteria: Methods and evaluation criteria are appropriate. Theoretical Claims: There are no explicit theoretical proofs or claims that require verification. The focus of the manuscript is primarily on the empirical evaluation of PG-NAG rather than on theoretical analysis. Experimental Designs Or Analyses: The experimental setup and analyses appear to be well-structured and appropriate for assessing the claims made. Supplementary Material: I have seen the supplementary material, they are useful. Relation To Broader Scientific Literature: The paper is well-aligned with recent literature, like neural architecture search and diffusion models. Essential References Not Discussed: No significant references are missing. Other Strengths And Weaknesses: Pros: 1. 
The manuscript introduces a new approach to neural architecture generation by leveraging prior knowledge to guide the diffusion model, eliminating the need for traditional search and evaluation processes. 2. This work achieves high performance with extremely low computational costs (0.004 GPU days). 3. Extensive experiments and comparisons with baselines thoroughly validate the effectiveness and efficiency of PG-NAG. Cons: 1. Providing deeper analysis and insights into why the proposed method works effectively would enhance the quality of the manuscript. 2. The manuscript involves evaluations across multiple tasks and benchmarks, which strengthens the robustness of the work. However, this also introduces a multitude of metrics. A detailed explanation of these metrics would be beneficial for understanding the contributions of the manuscript. 3. The prior knowledge extracted from existing benchmark datasets may not always represent the target tasks or domains. Further exploration is needed to understand how the selection of benchmark datasets impacts the effectiveness of the generated architectures. 4. The motivation behind using Shapley values to quantify the contributions of components within an architecture needs to be better articulated. 5. Highlighting the best-performing results would make it easier for readers to quickly identify the strengths of the method. Other Comments Or Suggestions: Please see Strengths And Weaknesses. Questions For Authors: 1. How can the quality and diversity of prior knowledge be ensured when selecting high-performance architectures from benchmark datasets? 2. How does PG-NAG handle the trade-off between computational efficiency and the accuracy of generated architectures, especially when increasing the number of diffusion steps? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for recognizing our good performance, effectiveness, and efficiency. We are also grateful for the valuable and constructive feedback.

## Weaknesses

> An analysis of how the method works effectively

The effectiveness of PG-NAG can be attributed to prior knowledge guidance, operation feature extraction, and connection feature extraction. As shown in Table 6 in the main text, skipping or replacing any of these results in a decline in performance. Specifically, prior knowledge measures the contribution of each operation or connection in an architecture with Shapley values. Moreover, the operation feature extraction effectively learns representations of operations that contribute significantly to performance. These learned features are then integrated into the generation process. Similarly, the connection feature extraction learns the connections that are useful to high-performance architectures, ensuring the generation of high-performance architectures.

> An explanation of metrics used in PG-NAG

The explanations of the six metrics used are in the table at (_https://anonymous.4open.science/r/PGNAG/metrics.md_). We have added this table to the new manuscript.

> How does the selection of benchmarks impact the effectiveness of generated architectures

We leverage NAS-Bench-101, NAS-Bench-201, and NAS-Bench-301 to ensure the generated architectures perform well on basic image processing and large-scale complex tasks, exhibiting strong generalization capabilities across other tasks.

* NAS-Bench-101 focuses on image classification, ensuring the generated architectures perform well on basic image tasks. Our experiment shows that prior knowledge without NAS-Bench-101 yields lower performance (comment 3 of Reviewer VjZy).
* NAS-Bench-201 contains simple but effective operations and has been the basis for various search spaces. Therefore, it can guarantee that the generated architectures have generalization capabilities across different tasks.
* NAS-Bench-301 supports complex architecture design for large-scale tasks, ensuring that the generated architectures are suitable for complex tasks like ImageNet.

> The motivation for Shapley values

The motivation behind using Shapley values lies in the cooperative relationship between operations and connections in an architecture. Specifically, the operations and connections in an architecture are not independent of each other; they interact as a whole to determine the overall performance. The Shapley value quantifies each player's contribution in a cooperative game, thus helping us identify the contribution of operations and connections to the architecture performance [1]. By incorporating this, PG-NAG can focus on generating architectures that emphasize these critical components, leading to higher-performance designs.

> Highlight the best-performing results

We have highlighted the best results in the new manuscript.

## Questions

> How to ensure the quality and diversity of prior knowledge?

The quality and diversity of prior knowledge are ensured by the benchmark selection, diversity, and sampling method.

* Selection: We select high-performance architectures from three widely used NAS benchmarks: NAS-Bench-101, NAS-Bench-201, and NAS-Bench-301. These architectures achieve good performance on well-known image classification datasets such as ImageNet. This allows us to learn how components are composed in high-performance architectures and apply this knowledge to other search spaces.
* Diversity: To illustrate the diversity of the selected benchmarks, the figure at (_https://anonymous.4open.science/r/PGNAG/prior.png_) visualizes the distribution of $20$ architectures sampled from NAS-Bench-101, confirming that our selected architectures cover a diverse distribution in terms of both accuracy and complexity.
* Sampling method: We conduct ablation studies on the number of selected architectures in Table 8 and on the selection method (Weakness 1 of Reviewer PwDK).
The results confirm that PG-NAG utilizes an effective sampling method to generate high-performance architectures.

> How does PG-NAG balance computational efficiency and accuracy?

PG-NAG addresses this trade-off by using a sigmoid noise schedule and prior knowledge to reduce diffusion steps. Firstly, we conduct experiments on CIFAR-10 in NAS-Bench-201 on how diffusion steps impact performance. The results below show that beyond $1,000$ steps, performance gains are minimal compared to the added computational cost.

|time steps|CIFAR-10|
|:-:|:-:|
|$800$|$94.22$|
|$1,000$|$94.36$|
|$1,200$|$94.37$|

Secondly, to achieve high performance in fewer steps, we use a sigmoid noise schedule for a smoother and more controlled noise removal process. Additionally, by integrating prior knowledge, PG-NAG effectively reduces the search space and enables the diffusion model to focus on high-performance designs without added computational cost.

**Reference**

[1] Shapley-NAS: Discovering operation contribution for neural architecture search, CVPR'22

--- Rebuttal Comment 1.1: Comment: The authors have addressed my concerns well. I acknowledge the authors' contributions on efficiency and performance, and their efforts on quality and diversity. Therefore, I have increased my score to 4. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable comments and positive feedback. Your insightful comments have substantially enhanced the clarity and overall quality of our paper. We appreciate your time and effort in reviewing our work. Thanks!
Improved Approximations for Hard Graph Problems using Predictions
Accept (poster)
Summary: This paper addresses NP-hard graph problems by developing learning-enhanced approximation algorithms. The authors identify that existing prediction-based approaches predominantly rely on vertex-level information, which may limit performance improvements. To overcome this limitation, they propose a novel framework incorporating edge prediction mechanisms to enhance approximation algorithms. The core technical contribution involves systematically integrating edge prediction information with classical approximation methods through adaptive thresholding strategies. Experimental validation is conducted on two moderately-sized graphs: Facebook and Congress. Results demonstrate consistent performance improvements across varying ε parameters, compared to baselines. Claims And Evidence: Basically yes. However, it would be even better if the authors could address the following concerns of mine. Methods And Evaluation Criteria: The benchmark datasets used in the paper do not reflect the essence of the problem. Firstly, according to the authors' description, the two datasets consist of medium-sized graphs, so the proposed algorithm has not been validated on graphs of other sizes. Secondly, the datasets used in the paper have a long time interval between them, and it seems that there are other datasets available in this field. Please provide a reasonable explanation. Theoretical Claims: Yes. Experimental Designs Or Analyses: 1. This paper only uses two graphs, but it does not explicitly address whether the algorithm remains efficient in terms of both time and space complexity when applied to very large graphs. This point is not clearly discussed. 2. 
This paper conducts experiments on two moderately-sized graphs, and although the results show that "for both datasets, we demonstrate that for an appropriate ε, our learning-augmented algorithm achieves the best performance on both graphs," it seems that the paper does not provide a clear explanation of how to determine this ε for graphs of different sizes. Supplementary Material: Yes. Relation To Broader Scientific Literature: In the abstract, the authors state that their algorithm builds upon and extends the ε-prediction framework introduced by Cohen‐Addad, d’Orsi, Gupta, Lee, and Panigrahi (NeurIPS 2024). In the introduction, they also mention that 'learning-augmented algorithms have recently emerged as a popular paradigm for beyond worst-case analysis via incorporating machine learning into classical algorithm design,' thereby referencing related literature in this research area. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: ## Strengths: 1. This paper introduces a learning-enhanced framework based on edge prediction, which offers a new perspective for improving approximation algorithms for NP-hard graph problems by incorporating predictive information. 2. This paper provides a detailed theoretical analysis, including a series of lemmas and proofs, such as Lemma A.5 to A.9, which lay a solid theoretical foundation for the algorithm's effectiveness and performance guarantees. ## Weaknesses: 1. The algorithm relies on a learning-enhanced framework, suggesting its performance depends on the accuracy of the predictions. The paper doesn't clearly explain how prediction errors are handled and whether they might significantly degrade the algorithm’s performance. Other Comments Or Suggestions: N/A. Questions For Authors: Please answer the two questions in the "Experimental Designs Or Analyses" section as well as the question in "Weaknesses". Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their careful reading and comments. We address the weaknesses they mentioned below.

**On datasets and benchmarks**: The main focus of our paper is on giving rigorous theoretical improvements for classic NP-hard problems. We view our experiments as proof of concept, demonstrating that our theoretical ideas are also implementable. (We note that prior work such as Cohen-Addad et al., who also studied augmenting NP-hard problems, does not include experiments.) We also provide some intuition for our experimental choices. For independent set, we believe it is natural to test on social networks since the problem optimizes for a collection of nodes with no mutual connections. Thus, in our paper, we selected two social networks from the popular SNAP library. We picked moderately sized networks since we wanted to validate the quality of the approximation returned by our augmented algorithm by comparing it to the exact optimal solution. However, Maximum Independent Set is NP-hard, so computing the exact optimum requires running an expensive integer linear program, which is prohibitive for large graphs (this is exactly why approximation algorithms for NP-hard problems are useful!). Note that to use our algorithm, computing the optimum is not necessary; we only do it to compute our algorithm's exact approximation factor on real-world datasets. In practice, this step can be skipped since we already give a mathematical guarantee bounding the approximation factor. Lastly, **we ran a new experiment on a much larger graph**, a subgraph of a large social network from SNAP (https://snap.stanford.edu/data/twitch_gamers.html) with ~50k nodes and ~1.1 million edges (we pruned the original graph to make the integer linear program for finding the optimum feasible).
As seen in the figure at this anonymous link (https://ibb.co/60N2W8T1), the qualitative behavior remains the same: our learning-based algorithm can outperform the standard greedy approximation algorithm as well as the algorithm that only uses the predictions.

> does not explicitly address whether the algorithm remains efficient in terms of both time and space complexity when applied to very large graphs.

All of our algorithms provably run in polynomial time and space, so theoretically they are efficient even for very large graphs. This is our main message: by using very noisy predictions, we can get improved approximation algorithms for fundamental optimization problems in polynomial time. Our approximations using predictions overcome existing barriers that, without predictions, cannot be crossed in polynomial time (assuming P != NP).

> the paper does not provide a clear explanation of how to determine this ε for graphs of different sizes

> The paper doesn't clearly explain how prediction errors are handled

Our guarantees **already have worst-case guarantees built in**, even if the predictions are arbitrarily corrupt, in three ways.

1) Our approximation factors consist of two terms: one coming from the classic bounds without predictions and a term $f(\epsilon)$ that is the advantage we have from edge predictions that are correct with probability $1/2 + \epsilon$. (Note $f(\epsilon)$ depends on the problem.) We recover the original worst-case guarantees by letting $\epsilon \rightarrow 0$. This corresponds to predictions that are random noise with no signal. On the other hand, our approximation factors improve as $\epsilon$ increases. Thus, our bounds naturally interpolate between the purely noisy case and the case of large $\epsilon$, where we provably obtain an advantage over no predictions.

2) Even if the $\epsilon$ parameter is not known in practice, we can simply guess over multiple choices of $\epsilon$, run our algorithm, and take the best solution. E.g.
in vertex cover, we can instantiate our algorithm for different $\epsilon$ values and take the smallest cover over all choices. This is because the problems we study are in NP, so we can compute the quality of the solution in polynomial time. Since for our theoretical bounds we only need to know $\epsilon$ up to a constant factor, our guessing over $\epsilon$ can be done efficiently. Note that this also handles the reviewer's other concern about determining $\epsilon$ in practice.

3) Lastly, we can also run another approximation algorithm in parallel to ours, e.g. a classic approximation algorithm, and take the best solution (this is because the problems are in the class NP, so we can compute the quality of the solution returned by both algorithms). This ensures that we can never output something worse than the classic algorithm.

Overall, we thank the reviewer for their feedback. We believe we have addressed the reviewer's main concerns about how our algorithms scale, as well as how our algorithms handle prediction errors. We are happy to provide additional clarifications and engage further in the discussion if we have misunderstood any crucial points. Many thanks, The Authors.
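The "guess over $\epsilon$ and keep the best feasible solution" safeguard described in this rebuttal can be sketched for Vertex Cover as follows; `cover_from_predictions` is a simplified hypothetical stand-in (majority-style voting on edge bits plus a greedy patch-up), not the paper's exact routine.

```python
def is_vertex_cover(edges, cover):
    """Feasibility check: every edge must have an endpoint in the cover."""
    return all(u in cover or v in cover for u, v in edges)

def greedy_matching_cover(edges):
    """Classic 2-approximation: take both endpoints of a maximal matching."""
    cover, matched = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matched |= {u, v}
            cover |= {u, v}
    return cover

def cover_from_predictions(edges, predictions, eps):
    """Hypothetical prediction-based routine: keep vertices whose edge
    bits vote 'in OPT' by an eps-dependent margin, then cover any
    remaining edges greedily."""
    votes = {}
    for (u, v), (bu, bv) in predictions.items():
        votes.setdefault(u, []).append(bu)
        votes.setdefault(v, []).append(bv)
    cover = {x for x, bs in votes.items()
             if sum(bs) >= (0.5 + eps) * len(bs)}
    leftover = [(u, v) for u, v in edges if u not in cover and v not in cover]
    return cover | greedy_matching_cover(leftover)

def best_cover(edges, predictions, eps_grid=(0.05, 0.1, 0.2, 0.4)):
    """Try several eps guesses plus the classic algorithm; keep the
    smallest feasible cover, so corrupt predictions never hurt."""
    candidates = [greedy_matching_cover(edges)]
    candidates += [cover_from_predictions(edges, predictions, e) for e in eps_grid]
    feasible = [c for c in candidates if is_vertex_cover(edges, c)]
    return min(feasible, key=len)

# Toy star graph: accurate edge bits point at the center vertex 0.
edges = [(0, 1), (0, 2), (0, 3)]
preds = {(0, 1): (1, 0), (0, 2): (1, 0), (0, 3): (1, 0)}
chosen = best_cover(edges, preds)
```

Since Vertex Cover is in NP, feasibility and size of each candidate are checkable in polynomial time, which is what makes this "take the best over all guesses" step cheap.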
Summary: This paper introduces a new prediction model that extends the framework of Cohen-Addad et al. (NeurIPS 2024) to improve approximation ratios for NP-hard graph problems, including (weighted and unweighted) Vertex Cover, Set Cover, Max Independent Set, and Max Cut. In their prediction model, each edge is assigned i.i.d. bits that provide $\epsilon$-accurate information about the variables participating in the edge constraint. For instance, in the Vertex Cover and Max Independent Set problems, each edge has two bits indicating whether its endpoints belong to a fixed optimal solution. Each bit is independently correct with probability $1/2+\epsilon$, regardless of the other bit or other edges. Using this prediction model, they achieve improved approximation ratios over classical approximation algorithms (without predictions) and learning-augmented approaches based on alternative prediction frameworks. The main algorithmic insight is to leverage predictions for high-degree vertices—where majority voting yields more reliable estimates—and apply a standard approximation algorithm for low-degree vertices. Claims And Evidence: All the claims are mathematically proved. Methods And Evaluation Criteria: The paper evaluates the performance of its algorithms using the approximation ratio, which is standard in the literature. Additionally, its learning-augmented framework, though novel, is reasonable. Theoretical Claims: I did not check the proofs in the appendices, but the proof sketches in the main body of the paper make sense. Experimental Designs Or Analyses: The experimental setup, baselines, datasets, and performance measure are reasonable. However, it would be better to also report the variance of the results for algorithms that use predictions. Supplementary Material: I did not review the supplementary material. 
Relation To Broader Scientific Literature: In Section 3, the authors compare their work with previous classical and learning-augmented algorithms for the studied problems. They also mention hardness results related to these problems. The previous learning-augmented results use different prediction frameworks; I mention two of them here: * Antoniadis et al. (2024) studied the Weighted Vertex Cover under a different prediction model that predicts the optimal set of vertices, and achieved an approximation ratio of $1+\frac{\eta^+ + \eta^-}{OPT}$, where $\eta^+$ and $\eta^-$ are the total weight of the false positive and false negative edges, respectively. In contrast, this work presents an algorithm with approximation factor $2-\Omega(\frac{\log \log 1/\epsilon}{\log 1/\epsilon})$. * Cohen-Addad et al. (NeurIPS 2024) studied the Max Cut problem under $\epsilon$-accurate vertex predictions, and achieved an approximation ratio of $\alpha_{GW}+\tilde{\Omega}(\epsilon^4)$, where $\alpha_{GW}$ is the Goemans-Williamson constant. In contrast, this work uses $\epsilon$-accurate edge predictions and achieves an approximation ratio of $\alpha_{GW}+\tilde{\Omega}(\epsilon^2)$. Essential References Not Discussed: I am not aware of any related works that are essential to understanding the key contributions of the paper but are not currently cited. Other Strengths And Weaknesses: The new prediction model introduced in this paper is interesting, achieves strong results, and improves upon previous results in other models. The proposed algorithms are simple and intuitive, and their ideas may have applications in other related problems. The paper is well-written and easy to follow. In particular, presenting the big picture and the main algorithmic ideas in the introduction was especially helpful. 
Other Comments Or Suggestions: It would be interesting to see how the algorithms perform if the predictions are not $\epsilon$-accurate but instead come from a machine learning algorithm that does not have access to the current instance. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
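The intuition in the summary above, that majority voting over a vertex's edge bits grows more reliable with its degree, can be checked with a small Monte Carlo sketch; the degrees, $\epsilon = 0.1$, and trial count are illustrative choices.

```python
import random

def majority_correct_rate(degree, eps, trials=20000, seed=0):
    """Estimate the probability that the majority of `degree` independent
    bits, each correct with probability 1/2 + eps, recovers the truth."""
    rng = random.Random(seed)
    p = 0.5 + eps
    wins = 0
    for _ in range(trials):
        correct = sum(rng.random() < p for _ in range(degree))
        if 2 * correct > degree:  # strict majority (odd degrees avoid ties)
            wins += 1
    return wins / trials

low = majority_correct_rate(1, 0.1)     # low-degree vertex: barely beats a coin flip
high = majority_correct_rate(101, 0.1)  # high-degree vertex: near-certain recovery
```

By a Hoeffding bound the failure probability decays like $e^{-2\epsilon^2 d}$ in the degree $d$, which is why the algorithms trust the predictions on high-degree vertices and fall back to a classic greedy step on low-degree ones.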
Rebuttal 1: Rebuttal: We thank the reviewer for their careful reading and comments. > report the variance of the results for algorithms Thank you; we will do this in the final version. > It would be interesting to see how the algorithms perform if the predictions are not eps-accurate Thank you for this question. This is a nice future direction to study. However, we note that even if the $\epsilon$ parameter is not known in practice, we can simply guess over multiple choices of $\epsilon$, run our algorithm, and simply take the best solution. E.g. in the vertex cover example, we can instantiate our algorithm for different $\epsilon$ values and take the smallest vertex cover over all choices.
Summary: The paper studies learning-augmented algorithms for hard graph problems. The authors introduce a new setting in which the algorithm can count on a prediction algorithm that provides two bits per edge, one for each incident vertex, which are positively correlated with whether the vertex satisfies the edge constraint. They show that in this setting algorithms can be designed whose approximation guarantees break the hardness barrier (in the absence of predictions). Moreover, they show that this model may be more powerful than the one where one bit per vertex is predicted. ## Update after rebuttal I kept my positive score. The rebuttal phase did not bring any additional information to justify a decrease. Claims And Evidence: The claims are supported by proofs of the stated improved bounds. Moreover, the authors present a moderate experimental analysis for the Maximum Independent Set problem. Here, I would have expected also a comparison with the algorithm of Braverman et al. Methods And Evaluation Criteria: Yes. The proofs appear to be sound. Theoretical Claims: I checked all the proofs in the body of the paper. I did not verify the appendix. Experimental Designs Or Analyses: See the above comment in Claims and Evidence Supplementary Material: no Relation To Broader Scientific Literature: The paper introduces a new setting for prediction-augmented algorithms. While the previously proposed setting was based on one predicted bit per vertex, here more bits, namely one per incident edge, are predicted per vertex. The authors discuss both the significance of and the positive efficiency gap between the new setting and the previous one. Essential References Not Discussed: I think the treatment of the related literature is comprehensive Other Strengths And Weaknesses: Strengths: a new model is proposed and its efficacy and improvements are proved w.r.t. the previous model. 
A general approach for the new setting is designed that exploits known algorithms for degree-constrained instances. Weaknesses: it is not clear how, in practical cases, one can make available predictions with the desired bounded reliability. Other Comments Or Suggestions: a few typos and corrections: page 3, lines 1 and 9, column 2: above which --> on which page 3, line 11, column 2 many --> may page 4, line 10 column 1 Minimum --> Maximum Proof sketch of Theorem 4.1 (equation in display) in the left hand side of the inequality: w(S) should be S in the right hand side of the inequality, the first term should talk about S_0 not S_1 Questions For Authors: How do you guarantee the goodness of the prediction? Can you provide an experimental comparison with the other prediction-augmented algorithms proposed in the literature, rather than with artificial new prediction algorithms or non-prediction-based algorithms? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their careful reading and comments.

> a few typos and corrections:

Thank you, we have fixed the typos.

> It is not clear how in practical cases one can make available predictions with the desired bounded reliability.

> How do you guarantee the goodness of the prediction?

Our guarantees **already** have worst-case guarantees built in, even if the predictions are arbitrarily corrupt, in three ways.

1) Our approximation factors consist of two terms: one coming from the classic bounds without predictions and a term $f(\epsilon)$ that is the advantage we have from edge predictions that are correct with probability $1/2 + \epsilon$. (Note $f(\epsilon)$ depends on the particular problem studied.) We recover the original worst-case guarantees by letting $\epsilon \rightarrow 0$. This corresponds to predictions that are random noise and have no signal. On the other hand, our approximation factors improve as $\epsilon$ increases. Thus, our bounds naturally interpolate between the purely noisy case (where our guarantees converge to worst-case bounds) and the case of large $\epsilon$, where we provably obtain an advantage over the setting with no predictions.

2) Even if the $\epsilon$ parameter is not known in practice, we can simply guess over multiple choices of $\epsilon$, run our algorithm, and take the best solution. E.g. in the vertex cover example, we can instantiate our algorithm for different $\epsilon$ values and take the smallest vertex cover over all choices. This is because the problems we study are in the class NP, so we can compute the quality of the solution in polynomial time. Since for our theoretical bounds we only need to know $\epsilon$ up to a constant factor, our guessing over $\epsilon$ can be done efficiently. Note that this also handles the reviewer's other concern about determining $\epsilon$ in practice. 
3) Lastly, we can simply run another approximation algorithm in parallel to our algorithm, e.g. the classic approximation algorithms without predictions, and take the best solution (again this is because the problems are in the class NP so we can compute the quality of the solution returned by both algorithms). This ensures that we can never output something worse than the classic algorithms. > Can you provide experimental comparison with the other prediction-augmented algorithms proposed in the literature. For independent-set (the setting of our experiments) we are only aware of one prior work of Braverman et al. They use a different prediction model (vertex-based predictions) and obtain a substantially worse theoretical approximation factor (their approximation factor can be $\sqrt{n}$ whereas we get a constant approximation). The authors did not provide an implementation of their algorithm. Nevertheless, we tested their algorithm on the two datasets in our main submission. If we implement their algorithm as it is written in their paper, then their algorithm just converges to the standard greedy algorithm (dotted green line in our Figures). This is because their algorithm first prunes nodes based on a complicated degree condition (see Algorithm 1 in https://arxiv.org/pdf/2407.11364) and then runs the greedy solution on the pruned graph. However, the constants are quite large in the pruning condition so in the two graphs we tested, none of the nodes were pruned (and we suspect this to be the case for most “real world” graphs). Thus, their algorithm performs their step 4 (compute the greedy solution on the remaining un-pruned nodes) on the entire graph. (More precisely, their condition 2 of including all nodes in L with degrees at most $36 \cdot \log n$ always includes all nodes in our datasets). Perhaps one can optimize their constants to obtain a more reasonable bound, but we did not pursue this. 
Given this, we believe their algorithm to be mostly of theoretical interest, but it is an interesting direction for future work to devise a more practical version of their algorithm.
Summary: They design algorithms for some fundamental NP-hard graph problems such as (Weighted) Vertex Cover, Set Cover, Maximum Independent Set, and MaxCut when we have some random information about an optimal solution. More precisely, they assume that for each edge, we have two bits for its endpoints regarding whether they are in the optimal solution or not, but these bits are not always accurate. Instead, each bit correctly reflects the optimal solution with probability $1/2 + \epsilon$. Their algorithms achieve a better approximation factor than those designed for the standard setting without predictions. Their general idea is, for high-degree vertices, to decide their status in the solution based on the majority of the information we have about them. This makes sense since if the degree of a vertex is $d$, then we have $d$ samples about whether this vertex is inside the optimal solution or not. As the number of samples increases, the confidence in knowing the vertex's state improves. Finally, they handle low-degree vertices using a simple greedy approach specific to each problem.

Claims And Evidence: They provide proofs for their claims.

Methods And Evaluation Criteria: They provide an evaluation, but only for one of their proposed algorithms. However, this is not a major concern, as the main contribution of the paper lies in its theoretical results.

Theoretical Claims: I reviewed the proofs in the main part of the paper, but not the appendix.

Experimental Designs Or Analyses: No.

Supplementary Material: No.

Relation To Broader Scientific Literature: Previous works study the case where for each vertex we have one random bit that gives information about the optimal solution. In contrast, this work assigns two bits per edge, providing significantly more information for high-degree vertices.

Essential References Not Discussed: No.

Other Strengths And Weaknesses: I am not sure why their model is interesting.
Since the predictions are related to vertices, it does not make sense that each vertex has a separate prediction for each adjacent edge. The prior model, where each vertex had a single prediction bit, seems more realistic to me. Additionally, this model provides excessive information about high-degree vertices, which their algorithm takes advantage of. I find the paper's motivation insufficient in justifying the significance of their model. Other Comments Or Suggestions: You mention previous work on vertex-based predictions but do not provide their approximation factors. Including these would facilitate comparison with your results. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 1
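The majority-vote intuition described in the summary above (a degree-$d$ vertex receives $d$ noisy bits, each correct with probability $1/2 + \epsilon$) can be illustrated with a small Monte Carlo simulation; the parameter values below are illustrative, not from the paper.

```python
import random

# A vertex of degree d receives d independent noisy bits about whether it
# is in the optimal solution; each bit is correct with prob. 1/2 + eps.
# Majority vote over the d bits recovers the truth with probability that
# improves as d grows (a standard Chernoff-bound argument), which is why
# high-degree vertices can be classified reliably.

def majority_correct_rate(d, eps, truth=1, trials=2000, seed=0):
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        bits = [truth if rng.random() < 0.5 + eps else 1 - truth
                for _ in range(d)]
        vote = 1 if 2 * sum(bits) > d else 0
        correct += (vote == truth)
    return correct / trials

low = majority_correct_rate(d=3, eps=0.1)    # few samples: unreliable
high = majority_correct_rate(d=101, eps=0.1)  # many samples: reliable
```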
Rebuttal 1: Rebuttal: We thank the reviewer for their careful reading and comments. We address the weaknesses they mentioned below.

> Their general idea is that for high degree vertices, decide their status in the solution based on the majority of information we have about them ... Finally, they handle low-degree vertices using a simple greedy approach specific to each problem.

We agree that our high-level idea of separating the vertices into heavy and light degrees is quite intuitive and generalizes across many problems. We view this as a strength of our framework. However, we would like to point out the many technical challenges we need to overcome. Our "high degree" threshold is a constant (depending on $\epsilon$ and independent of the graph size). This alone is not enough to union bound over potentially $O(n)$ high-degree vertices. Thus, there is always a non-negligible chance that a high-degree vertex gets misclassified. This can be very problematic since high-degree vertices can be highly influential in the final solution. For example, in vertex cover, a high-degree vertex which is misclassified as not being in the solution can force us to add all of its neighbors to the vertex cover, leading to an unbounded competitive ratio if we are not careful. Thus, in all of our problems, we need a sophisticated "cleaning step" which not only fixes any misclassifications, but also helps us successfully merge in the solution found for low-degree vertices. This is non-trivial since the solution on low-degree vertices may conflict with the solution on high-degree vertices, e.g. for independent set. Our cleaning step is especially subtle for the weighted vertex cover problem, where the weight of a vertex is unrelated to its degree (see the paragraph starting on line 589 for a technical discussion).

> I am not sure why their model is interesting.
We believe the edge-based prediction model that we introduce is interesting for the following reasons:

1) Our model gives much stronger theoretical results compared to vertex predictions. For example, in Max Cut, vertex predictions in Cohen-Addad et al. give an $\approx \epsilon^4$ additive approximation advantage over the best classical approximation, whereas edge predictions give an $\approx \epsilon^2 \gg \epsilon^4$ advantage. The difference is much more pronounced for independent set, where we get a constant factor approximation, whereas the prior work of Braverman et al. can only guarantee an $O(\sqrt{n})$ factor approximation with vertex predictions. For vertex cover (weighted and unweighted), we get an approximation factor strictly smaller than 2, whereas the vertex-prediction-based algorithm of Antoniadis et al. has an approximation factor depending on the number of predicted false positives and negatives, which can lead to an unbounded approximation ratio.

2) Our results do not require i.i.d. predictions across edges. Rather, 4-wise independence of the predictions suffices (see our Remark 4.6). This means that our algorithms can handle a potentially huge number of correlations among the predictions.

3) Lastly, while our work is the first to introduce edge predictions for augmenting NP-hard problems, we remark that edge-based predictions have also been used in other learning-augmented optimization problems (unrelated to NP-complete problems), e.g. [1] for correlation clustering and [2] for metric clustering.

[1] KwikBucks: Correlation Clustering with Cheap-Weak and Expensive-Strong Signals. ICLR '23.
[2] Metric Clustering and MST with Strong and Weak Distance Oracles. COLT '24.

> You mention previous work on vertex-based predictions but do not provide their approximation factors. Including these would facilitate comparison with your results.

Please see the first point of our response above.
We also note that our submission does contain a thorough discussion on prior work on vertex-based predictions. See Lines 146-164 (right column) for discussion on prior work on vertex-cover, Lines 174-185 (left column) for independent set, and Lines 186-190 for discussion of prior work on max-cut. Overall, we thank the reviewer again for their feedback. We believe we have addressed the reviewer’s main concern about why our new model is interesting. We are happy to provide additional clarifications and engage further in the discussions if we have missed or misunderstood any crucial points. Many thanks, The Authors.
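As an aside on the 4-wise independence mentioned in the rebuttal (Remark 4.6): a standard way to generate 4-wise independent values is a random degree-3 polynomial over a prime field. The sketch below is a textbook construction for illustration, not the paper's; taking the value mod 2 yields nearly unbiased bits (since $p$ is odd, the bias is $O(1/p)$).

```python
import random

# 4-wise independent hash via a random degree-3 polynomial mod a prime:
# h(x) = a3*x^3 + a2*x^2 + a1*x + a0 (mod p). Any 4 distinct inputs map
# to independent uniform values over Z_p; h(x) % 2 gives near-unbiased
# bits suitable as correlated-but-4-wise-independent "predictions".

P = 2_147_483_647  # Mersenne prime 2^31 - 1

def make_four_wise_hash(seed=0):
    rng = random.Random(seed)
    a = [rng.randrange(P) for _ in range(4)]
    def h(x):
        # Horner evaluation of the degree-3 polynomial mod P
        return (((a[3] * x + a[2]) * x + a[1]) * x + a[0]) % P
    return h

h = make_four_wise_hash()
bits = [h(edge_id) % 2 for edge_id in range(10)]  # one bit per edge id
```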
Hide & Seek: Transformer Symmetries Obscure Sharpness & Riemannian Geometry Finds It
Accept (spotlight poster)
Summary: The paper demonstrates that sharpness metrics on transformers are not a reliable proxy for generalization due to the symmetry properties of the attention mechanism. The authors propose working in a Riemannian space, specifically a quotient manifold derived from the symmetry group. Within this space, they introduce a geometric sharpness metric and show that, for diagonal networks, an analytical solution exists. Experimental results on diagonal networks and Vision Transformers (ViTs) reveal a strong correlation between generalization and the sharpness metric. However, the correlation is less significant for language models.

Claims And Evidence: The main claim is that classical sharpness is not well-suited for transformers due to specific symmetries in the parameter space. The second claim is that adapting the sharpness calculation to the quotient space, with respect to the symmetry group, improves its correlation with generalization. Both claims are supported theoretically and experimentally. The only reservation is that adaptive sharpness still performs reasonably well for vision models in the experimental results, although the proposed method is significantly better.

Methods And Evaluation Criteria: The evaluation follows protocols described in various papers. These methods are applied not only to the limited case of diagonal networks but also to pre-trained ViTs and language models. While deeper experimentation would be beneficial, the proposed method and evaluation criteria are well-founded.

Theoretical Claims: I read the proofs but did not verify their correctness.

Experimental Designs Or Analyses: I checked the validity of the three experiments. The protocols are straightforward and do not exhibit any issues.

Supplementary Material: I read the proofs and the examples in the supplementary material.
Relation To Broader Scientific Literature: The paper is well-situated within the current literature, and the authors demonstrate a strong understanding of the state of the art in the field.

Essential References Not Discussed: No

Other Strengths And Weaknesses:

**Strengths:**
- The paper is very well written despite the complexity of the domain.
- The proposed approach is elegant and general enough to open important perspectives for generalization estimation using sharpness.

**Weaknesses:**
- The paper frequently refers to the appendix, making it more challenging to read.
- The experimental section is convincing but limited in terms of architectures.

Other Comments Or Suggestions:
- Table 1 should be in the paper

Questions For Authors:
- What is the complexity and the computation cost of estimating geodesic sharpness? How does it compare to classical and adaptive sharpness? These questions are not sufficiently addressed in the paper and are not explored in the experimental section.
- How sensitive is the approach to different architectures? Only one architecture is considered for each experimental task, which could introduce bias in the results.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We wish to thank the reviewer for their comprehensive review and for their helpful suggestions. **Other Comments Or Suggestions:** > The Table 1 should be in the paper Thank you for pointing this out. We will move it to the main body of the paper. --- **Questions** > Q1: What is the complexity and the computation cost of estimating geodesic sharpness? How does it compare to classical and adaptive sharpness? This is a good question! We will add the following to the paper: Geodesic sharpness does not significantly differ in time complexity from adaptive sharpness (or classical sharpness with a nearly identical time complexity). In our language model experiments, **adaptive sharpness takes 1.5s per step, while our $S_{\text{inv}}$ takes 3s, and our $S_{\text{mix}}$ around 2s per step**. For $S_\text{inv}$, the main additional overhead is inverting a $d_{\text{head}} \times d_{\text{head}}$ matrix and performing a Sylvester solve in SciPy on CPU, as there is no such solve in PyTorch. >Q2: How sensitive is the approach to different architectures? Only one architecture is considered for each experimental task, which could introduce bias in the results. This is indeed a limitation of the experimental setup we inherit from [1]. To facilitate the comparison with [1], we decided to focus on the same architectures that were present there, but this could have a hidden bias. The reason for which this is done in [1] is to always compare models within the same loss surface (albeit at different points). Since the existence of correlation with generalization exists for all tasks we study, we suspect that this is not architecture-dependent, but are conducting further experiments with a broader set of ViT models to determine whether this indeed is the case. --- We hope we have addressed all of your questions and look forward to any further questions and insights that might come up during the discussion period. 
[1] Andriushchenko, M., Croce, F., Müller, M., Hein, M., and Flammarion, N. A modern look at the relationship between sharpness and generalization. 2023. --- Rebuttal Comment 1.1: Comment: Thank you for the precise response—it confirms my positive assessment of the paper. However, I would appreciate a more in-depth comparison with the paper cited by the first Reviewer. --- Reply to Comment 1.1.1: Comment: Thank you for your response. Certainly -- we initially weren't aware of that paper, but will add a discussion about it in the related works section. To summarize, [1] introduces a quotient manifold construction for re-scaling symmetries and then use the Riemannian spectral norm as a measure of worst-case flatness; they validate their approach both on synthetic data (where they check the invariance of their measure) and on real-life data/models: MNIST and CIFAR-10; CNNs as models. The main differences from our approach are as follows: - a) Our approach is more general and can accommodate both the $GL(h)$ symmetry of transformers, and the original re-scaling/scaling symmetry of convolutional/fully-connected networks, rendering it applicable to a wider range of modern architectures; - b) Our experimental setup is more challenging: we test on large-scale models (large transformers vs CNNs) and large-scale datasets (ImageNet vs CIFAR-10). Sharpness measures that account for re-scaling/scaling symmetries (e.g. adaptive sharpness) work quite well on CIFAR-10 and for CNNs and tends to break down on datasets like ImageNet and for transformers; - c) Conceptually, [1] defines worst-case sharpness on the usual norm-ball, appropriately generalized to the Riemannian setting, characterized by $|| \xi|| \leq \rho$ . We propose instead that the ball should be the one traced out by geodesics, to better respect the underlying geometry. 
In our ablations in appendix G (Figure (7) and Figure (8)) ignoring the geodesic component of our approach corresponds to the middle plots, which have notably lower Kendall-tau correlation values than those obtained by using a geodesic-based sharpness measure. - d) Performance-wise, we believe our approach is more efficient because it does not use the Hessian, and need only to use considerably cheaper gradients. [2] mentions that even in a fully optimized setting, Hessian vector products calculations are at least 2 to 4 times as expensive as gradient calculations, and require between two and three times as much memory. [1] Rangamani, A., Nguyen, N. H., Kumar, A., Phan, D., Chin, S. P., & Tran, T. D. (2021, June). A scale invariant measure of flatness for deep network minima. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1680-1684). IEEE. [2] How to compute Hessian-vector products? https://iclr-blogposts.github.io/2024/blog/bench-hvp/
Summary: The paper proposes to define the sharpness of the loss curve of neural networks via Riemannian geometry in order to account for symmetries in network parameters. While some reparameterization-invariant sharpness measures exist, they do not account for all possible symmetries in parameters, in particular not for the symmetries of attention layers in transformers. The paper provides an instantiation of the proposed geodesic sharpness for this type of symmetry in transformer architectures. The paper finds that geodesic sharpness correlates stronger with generalization than the previously proposed adaptive sharpness measure. Claims And Evidence: The paper's derivations of geodesic sharpness are sound. The experiments show a superior correlation of geodesic sharpness with generalization, compared to adaptive sharpness. Although the reparameterization-invariance follows from the derivation of geodesic sharpness, it would have been nice to empirically verify this, as well. Methods And Evaluation Criteria: The evaluation criteria are sound, geodesic sharpness is compared to adaptive sharpness in terms of its correlation with generalization for diagonal networks and transformers. The empirical evaluation could be a bit more comprehensive in terms of architectures, benchmark datasets and baseline generalization measures, but since the focus of the paper is on the theoretical contribution, I find the amount of experiments adequate. Theoretical Claims: I have checked the theoretical claims and proofs. The proofs are presented clearly, up to my limited understanding of Riemannian geometry. Both claims and proofs are sound. Experimental Designs Or Analyses: The experimental design is sound. The result that sharpness is negatively correlated with generalization for vision transformers is curious. 
A potential explanation might be the requirement of locally constant labels suggested by the analysis in [1]: their analysis suggests that sharpness only correlates with generalization if labels in representation space can be assumed to be locally constant, i.e., small perturbations of the representation do not change the true label ($P(y|x)\approx P(y|x+\xi)$). If for the vision transformer small perturbations in the representation should lead to strong changes in the true label, then one would expect a negative correlation between sharpness and generalization. Of course, this argument holds for relative sharpness, and thus might not be true for geodesic sharpness. [1] Petzka, Henning, et al. "Relative flatness and generalization." Advances in neural information processing systems 34 (2021): 18420-18432. Supplementary Material: I have reviewed the supplementary material. The additional ablation study, the explanation of basic concepts and discussion on the algorithm to compute geodesic sharpness (in particular the complexity analysis) are sound and valuable. The proofs are presented clearly, up to my limited understanding of Riemannian geometry. Relation To Broader Scientific Literature: While Maksym Andriushchenko and his co-authors have shown that many sharpness measures do not correlate well with generalization (in particular the SAM-based ones), relative flatness [1] appears to work better, also with transformers, although no direct comparison has been made - to the best of my knowledge. That is, regularizing with relative flatness improves generalization also for transformers [2] and its behavior wrt. adversarial examples is similar-ish for CNNs and transformers [3]. Since computing it for the penultimate layer and the CE loss is very efficient [3], it would be interesting to discuss geodesic sharpness wrt. relative sharpness. [2] Adilova, Linara, et al. "FAM: Relative Flatness Aware Minimization." Topological, Algebraic and Geometric Learning Workshops 2023. 
PMLR, 2023. [3] Walter, Nils Philipp, et al. "The uncanny valley: Exploring adversarial robustness from a flatness perspective." arXiv preprint arXiv:2405.16918 (2024). Essential References Not Discussed: The paper discusses essential references, to the best of my knowledge. Other Strengths And Weaknesses: The paper tackles an important issue in research on the relationship between sharpness of the loss surface and generalization, namely that of symmetries in parameter space. The contributions are sound and original. While more empirical evaluation is required to show the practical significance of the proposed geodesic sharpness, its significance from a theoretical side is solid. Other Comments Or Suggestions: I have no further comments. ######### After Rebuttal ############ I maintain my positive assessment and recommend acceptance. Questions For Authors: Q1: Have you considered measuring average geodesic sharpness, as well? Q2: Are the symmetries in [4] only instances of scaling and re-scaling? [4] Petzka, Henning, Martin Trimmel, and Cristian Sminchisescu. "Notes on the symmetries of 2-layer relu-networks." Proceedings of the northern lights deep learning workshop. Vol. 1. 2020. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We wish to thank the reviewer for their thoughtful review and their really interesting suggestions. We had not fully considered the possible connections with relative flatness, but find these to potentially be a really fruitful avenue of research. **Claims And Evidence:** >Although the reparameterization-invariance follows from the derivation of geodesic sharpness, it would have been nice to empirically verify this, as well. This is a good suggestion, and we will add this to any final versions in an appendix (similar to Figure 1 in Andriushchenko et al.). **Experimental Designs Or Analyses:** > The result that sharpness is negatively correlated with generalization for vision transformers is curious. [...] Thank you for this great observation! This could quite possibly help explain this curious phenomenon. One possible way to test for this (at least on synthetic datasets) would be to use a similar experimental setup to that used in [1], modified for classification with diagonal networks, where we can artificially control local label constancy through class separation. More broadly, and with an eye to possible further experiments, if the reviewer happens to be aware of any possibly useful approximations to the local constancy of labels, we would welcome any such suggestions. In principle, we need access to the data-generating process details to estimate the labels' local constancy, something we don't have for ImageNet. We also suspect that looking into the data distribution is the most promising future direction for understanding the sign of the correlation flip. For instance, in our synthetic regression task, we observe differing behaviours for the correlation between sharpness and generalization in the under or overparametrized regimes. 
The data itself can introduce additional symmetries in the overparametrized regime since in this regime $n<d$, where $n$ is the number of data points and $d$ is the dimension, and so the data matrix $X$, which is $n \times d$, always has a non-trivial null space by the rank-nullity theorem. This implies that two predictors $\beta = u \odot v, \beta' = u' \odot v'$ should be equivalent if there is $z \in Null(X)$ s.t. $\beta=\beta'+z$. In the underparametrized regime, where $n>d$, this is no longer necessarily the case, and we expect the null-space to be trivial, thus making these symmetries disappear. We are unsure at this moment of what concrete impact these additional symmetries have but we intend to investigate them further. More broadly, data-dependent symmetries are already known in the literature (e.g. [2]), but remain, in our opinion, underexplored. **Relation To Broader Scientific Literature:** > That is, regularizing with relative flatness improves generalization also for transformers [2] and its behaviour wrt. adversarial examples is similar-ish for CNNs and transformers [3]. Since computing it for the penultimate layer and the CE loss is very efficient [3], it would be interesting to discuss geodesic sharpness wrt. relative sharpness. Thank you for the suggestion. We agree it would be interesting to add this to the paper, and we'll endeavour to do so in any final version. --- **Questions** > Q1: Have you considered measuring average geodesic sharpness, as well? This is a good question! We have considered it, especially since [3] reports that, at least for diagonal networks, average sharpness can have different correlation signs with generalization (positively correlated vs anti-correlated and vice-versa). We did not present results on it in our paper due to its mathematical complexity: we would need to find a computationally feasible algorithm to properly integrate over the high-dimensional geodesic ball. 
It was unclear to us whether this was entirely possible, although a Monte Carlo approach might yield results that are "good enough".

> Q2: Are the symmetries in [4] only instances of scaling and re-scaling?

Exactly!

---

We hope we have addressed all of your questions and look forward to any further questions and insights that might arise during the discussion.

[1] Petzka, Henning, et al. "Relative flatness and generalization." Advances in Neural Information Processing Systems 34 (2021): 18420-18432.
[2] Zhao, Bo, et al. "Symmetries, flat minima, and the conserved quantities of gradient flow." International Conference on Learning Representations 2023.
[3] Andriushchenko, M., Croce, F., Müller, M., Hein, M., and Flammarion, N. A modern look at the relationship between sharpness and generalization. 2023.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal.

- Regarding the problem of locally constant labels: I have not yet found a good solution for that myself. My main issue is that it seems that only a constant label in the representation is necessary, so even ensuring locally constant labels in the input distribution (e.g., via a synthetic dataset) does not guarantee locally constant labels in the representation. If using the penultimate layer, though, one might test for neural-collapse-style clustering, since this implies - at least empirically - that labels are locally constant in representation space.
- I agree that investigating data-dependent (and maybe even data-distribution-dependent) symmetries is a very interesting direction for future research. I can only further encourage you to follow that path.

I remain very positive about this paper and keep my rating.
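The rank-nullity observation from the rebuttal above (in the overparametrized regime $n < d$, the $n \times d$ data matrix $X$ has a non-trivial null space, so predictors $\beta$ and $\beta + z$ with $z \in \mathrm{Null}(X)$ agree on the training data) can be verified numerically; this is a small self-contained check, not code from the paper.

```python
import numpy as np

# Overparametrized regime: n < d, so X (n x d) has a null space of
# dimension d - rank(X) > 0 by rank-nullity. Any z in Null(X) gives a
# distinct predictor beta + z with identical training predictions.
rng = np.random.default_rng(0)

n, d = 5, 8
X = rng.standard_normal((n, d))

# Orthonormal basis of Null(X): rows of Vt beyond rank(X) in the SVD.
_, _, Vt = np.linalg.svd(X)
null_basis = Vt[np.linalg.matrix_rank(X):]

beta = rng.standard_normal(d)
z = null_basis[0]  # a non-trivial null-space direction
same_predictions = np.allclose(X @ beta, X @ (beta + z))
```

For a random Gaussian matrix the rank is $n$ almost surely, so the null space has dimension exactly $d - n$; in the underparametrized regime ($n > d$) the null space is trivial and these data-dependent symmetries disappear.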
Summary: The paper introduces geodesic sharpness, a novel adaptive sharpness measure defined on a quotient manifold that factors out the rich symmetries in transformer parameter spaces (notably the high-dimensional GL(h) symmetry in attention). By leveraging Riemannian geometry, the authors redefine perturbation norms and paths (using geodesics) so that the measure is invariant to symmetry-induced redundancies. They show that when geodesic sharpness is approximated beyond first order, it recovers strong correlations with generalization, in contrast to traditional adaptive sharpness measures. Claims And Evidence: The main claims are that (a) existing sharpness measures fail in transformers due to unaddressed parameter symmetries, and (b) by reinterpreting sharpness on the quotient manifold, one can obtain a measure (geodesic sharpness) that correlates strongly with generalization. The evidence includes rigorous derivations for simple diagonal networks and empirical evaluations on vision transformers (fine-tuned CLIP) and language models (BERT fine-tuned on MNLI) where Kendall’s tau correlations are consistently stronger for geodesic sharpness. Overall, the claims are well supported, though the variability in the sign of the correlation across tasks suggests further investigation is warranted. Methods And Evaluation Criteria: The method reformulates the sharpness measure by defining the quotient manifold induced by network symmetries, lifting Euclidean objects to their Riemannian counterparts, and approximating geodesic paths to measure worst-case loss variation within a geodesic ball. Evaluation is based on the correlation between the sharpness measure and generalization gap across controlled toy experiments and real-world transformer settings. The approach is conceptually sound, though practical geodesic approximations may require careful tuning. 
Theoretical Claims: While the derivations are largely convincing, the reliance on approximations and assumptions about the quotient manifold’s structure in complex networks are points that could benefit from further clarification. Experimental Designs Or Analyses: The experimental setup is well thought out. Supplementary Material: I did a quick read of the supplementary material. Relation To Broader Scientific Literature: The work extends the literature on generalization by connecting sharpness measures with the geometry of parameter space. By addressing higher-dimensional symmetries inherent in transformer architectures, it bridges a gap between geometric approaches in optimization and practical issues in modern deep learning. Essential References Not Discussed: There are no missing references to my knowledge. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: A more detailed ablation study on the effect of metric choice (invariant vs. mixed) could help clarify when one might be preferred over the other. Expanding the discussion on the conditions under which geodesic sharpness may flip its correlation sign would benefit practitioners. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review and their suggestions for improving the paper. We'll endeavour to include as much as possible in any final version. **Theoretical Claims** > While the derivations are largely convincing, the reliance on approximations and assumptions about the quotient manifold’s structure in complex networks are points that could benefit from further clarification. Thanks for pointing this out! We'll include a more detailed discussion on this in future versions. **Other Comments Or Suggestions** > A more detailed ablation study on the effect of metric choice (invariant vs. mixed) could help clarify when one might be preferred over the other. This is indeed a philosophically interesting question. From a theoretical perspective, we do not have any reason to prefer one metric over another as long as both of them correctly reflect symmetries. In practice, the metrics perform very similarly, with the mixed metric tending to perform slightly better and being faster to run. The mixed metric's numerics are also advantageous as we do not need to solve a Sylvester equation to project into the horizontal space. >Expanding the discussion on the conditions under which geodesic sharpness may flip its correlation sign would benefit practitioners. Thank you for this important question! This is something we're keen on investigating in future work, as it is one of the main factors limiting our approach's utility during training. We believe this will require significant research and taking into account more aspects, e.g. the data distribution (e.g. hypothesized by Reviewer hcv3), which are currently not considered in our framework that purely focuses on parameter space symmetries. --- We hope we have addressed all of your concerns and look forward to discussing any outstanding concerns in the discussion period.
Summary: This paper investigates the connection between sharpness and generalization for models with self-attention layers by properly accounting for symmetries present in the models. The authors consider the quotient manifold of parameters and measure sharpness within a geodesic ball on the quotient manifold. The paper claims to introduce the application of Riemannian geometry to deep network parameter symmetry, introduce the notion of geodesic sharpness, solve for geodesic sharpness in diagonal networks, and measure it empirically in transformers.

Claims And Evidence: The theoretical and empirical results for diagonal networks seem contradictory. The analytical derivation suggests that when the estimated predictor is close to the optimal predictor, the sharpness should be small (since $S \propto \| \beta_0 - \beta_* \|_2$). The empirical results show instead that larger sharpness is correlated with smaller test error, which means the estimated predictor is not close to the optimal predictor. Something does not seem right here. The empirical results in image transformers and language models also seem contradictory: sharpness is correlated with better performance for image transformers and anti-correlated for LMs. Is this actually a meaningful correlate of generalization?

Methods And Evaluation Criteria: The authors discuss LoRA adapters but do not conduct experiments for this scenario. The geodesic sharpness measure proposed by the authors could also be adapted to residual networks that contain GL(h) symmetries within a residual block. Including these experiments would make the paper stronger.

Theoretical Claims: I skimmed through the diagonal networks sharpness derivation; it seems correct to me, though I did not check the details.

Experimental Designs Or Analyses: No apparent issues.

Supplementary Material: Appendices A, C, E, I primarily.
Relation To Broader Scientific Literature: This paper proposes a quotient manifold and metric for GL(h) symmetries in deep network models that go beyond rescaling symmetries. This claim of theirs is accurate. Essential References Not Discussed: The authors claim that they are the first to apply Riemannian quotient manifolds to study deep network parameter symmetry. This was already done in a prior paper [1] that proposes a quotient manifold (along with a metric) for rescaling symmetries in MLPs/CNNs. The authors do not cite or discuss the relationship of their paper to this one. [1] Rangamani, A., Nguyen, N. H., Kumar, A., Phan, D., Chin, S. P., & Tran, T. D. (2021, June). A scale invariant measure of flatness for deep network minima. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1680-1684). IEEE. Other Strengths And Weaknesses: While the paper is easy to follow, it is unclear what the implications of its findings are. Are we able to leverage geodesic sharpness during optimization to find better minima? Can we prove tighter generalization bounds? What is the time complexity of finding the geodesic sharpness? Why do they choose the algorithm in appendix C instead of a Riemannian Hessian based measure? How do the two compare? Other Comments Or Suggestions: None Questions For Authors: 1. Can you reconcile your empirical and theoretical findings for the case of diagonal networks? I am confused how larger sharpness means better test loss 2. Please consider citing [1] since it already introduces a Riemannian quotient manifold of NN parameters albeit for rescaling symmetries. 3. What is the relationship between sharpness and generalization you can report? Your results seem to contradict the conventional wisdom that flat minima generalize better. [1] Rangamani, A., Nguyen, N. H., Kumar, A., Phan, D., Chin, S. P., & Tran, T. D. (2021, June). A scale invariant measure of flatness for deep network minima. 
In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1680-1684). IEEE. Code Of Conduct: Affirmed. Overall Recommendation: 2
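The parameter symmetry at issue in the diagonal-network discussion above can be illustrated with a minimal numpy sketch. The standard parameterization $\beta = u \odot v$ is assumed here; the paper's exact model may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
u, v = rng.normal(size=d), rng.normal(size=d)
x = rng.normal(size=d)

def predict(u, v, x):
    # A diagonal linear network: effective linear predictor beta = u * v.
    return np.dot(u * v, x)

# Rescaling (u, v) -> (alpha*u, v/alpha) moves through parameter space but
# leaves the predictor (hence the loss) unchanged -- exactly the kind of
# symmetry a quotient-manifold sharpness measure must be invariant to.
alpha = 3.7
assert np.isclose(predict(u, v, x), predict(alpha * u, v / alpha, x))
```

Naive sharpness measures can change under this rescaling even though the function does not, which is the motivation for measuring sharpness on the quotient.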
Rebuttal 1: Rebuttal: Thank you for the insightful review and helpful suggestions for improvement. We will make sure to mention the reference [1] that you brought to our attention, and we will contrast it with our paper, which goes beyond the re-scaling symmetry. Please find our answers to your remaining concerns below. **Q1** > Can you reconcile your empirical and theoretical findings [...] Thank you for pointing this out! There is indeed some context missing for it to fully make sense, but these findings are not contradictory. **TL;DR: The theoretical derivation (Equation (13)) assumes the underparameterized regime (fewer parameters than data), while the empirical results (Figure (3)) are for the practically more relevant overparameterized regime (more parameters than data), where deriving a closed-form is intractable.** Here are the details: - **Theoretical assumptions break down.** The theoretical expression (Equation (13)) assumes that $X^\top X = I_{d\times d}$ with data matrix $X \in \mathbb{R}^{n \times d}$ (number of data $n$ and parameters $d$). For this to be feasible, $n\geq d$, i.e. we are in the underparameterized regime. This assumption allows us to derive closed-form expressions for the unique optimal predictor and its sharpness (see, e.g. [2]). Unfortunately, analyzing the practically more relevant overparameterized regime exactly is intractable. - **We can reconcile empirical results with the theoretical prediction.** If we re-run the experiment from Figure (3) (overparameterized regime) in the **underparameterized regime**, we indeed obtain the positive correlation between sharpness and generalization, **as predicted by the theory in Equation (13)**. We believe it is practically less interesting because large models typically operate in the overparameterized regime.
- **Our presented empirical findings are consistent with other works.** The findings we report for overparameterized diagonal networks in the original submission agree with findings from other works. The anti-correlation of worst-case sharpness with generalization was also found in [3]. --- **Q2** > What is the relationship between sharpness and generalization you can report? Thank you for this important question! What we can report is that contrary to [3], once we account for symmetry, **there is a relationship between sharpness and generalization, and the Kendall-tau significantly differs from zero**. This is what we show in our experiments and we will make sure to make this more explicit in the text. We believe that answering how the different correlation signs come into being is beyond the scope of our paper. The full story is more complicated and rightfully deserves future investigation: Based on very recent insights from [4] which reports that sharpness minimization differs significantly between vision and language tasks, we hypothesize that the data distribution could play a significant role. Also, Reviewer hcv3 pointed out one interesting avenue to understanding this change in correlation sign via stability of the labels that we hope to follow up on in the very near future. **Other strengths and weaknesses** >While the paper is easy to follow, it is unclear what the implications of its findings are [...]? Our findings provide a foundation for future practical applications. Understanding the sign of the correlation between sharpness and generalization is the limiting factor, but once that is done, we expect geodesic sharpness to be useful for regularizing training. > What is the time complexity [...]? This is a good question! - **Computational cost:** Geodesic sharpness does not significantly differ in time complexity from adaptive sharpness. 
**In our language model experiments, adaptive sharpness takes 1.5s per step, while our $S_{\text{inv}}$ takes 3s, and our $S_{\text{mix}}$ takes around 2s**. - **Why we do not consider the Riemannian Hessian:** Mainly for reasons of computational efficiency. We deal with models with upwards of $100 M$ parameters, and even accessing quantities such as the Hessian trace through multiple Hessian-vector products is computationally expensive (this was already the case in [3]). --- **References:** [1] Rangamani, A., Nguyen, N. H., Kumar, A., Phan, D., Chin, S. P., & Tran, T. D. (2021, June). A scale invariant measure of flatness for deep network minima. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1680-1684). IEEE. [2] Roger Grosse (2022). "A Toy Model: Linear Regression". University of Toronto, Topics in Machine Learning: Neural Net Training Dynamics. [3] Andriushchenko, M., Croce, F., Müller, M., Hein, M., and Flammarion, N. A modern look at the relationship between sharpness and generalization. 2023. [4] Sidak Pal Singh, Hossein Mobahi, Atish Agarwala, and Yann Dauphin. Avoiding spurious sharpness minimization broadens applicability of SAM. arXiv preprint arXiv:2502.02407, 2025.
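For concreteness, the Kendall-tau rank correlation reported above to quantify the sharpness-generalization relationship can be computed as in the following pure-python sketch; the input arrays are hypothetical placeholders, not the paper's measurements:

```python
from itertools import combinations

def kendall_tau(a, b):
    """Kendall rank correlation: (concordant - discordant) / total pairs."""
    n = len(a)
    num = sum(
        1 if (a[i] - a[j]) * (b[i] - b[j]) > 0
        else -1 if (a[i] - a[j]) * (b[i] - b[j]) < 0
        else 0
        for i, j in combinations(range(n), 2)
    )
    return num / (n * (n - 1) / 2)

# Hypothetical per-model values: sharpness vs. test error.
sharpness = [0.9, 1.4, 2.1, 2.8, 3.5]
test_error = [0.31, 0.27, 0.22, 0.18, 0.15]
print(kendall_tau(sharpness, test_error))  # -1.0: perfectly anti-correlated
```

A Kendall-tau significantly different from zero (of either sign) is what the rebuttal reports once symmetry is accounted for.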
AsymRnR: Video Diffusion Transformers Acceleration with Asymmetric Reduction and Restoration
Accept (poster)
Summary: The authors claim that existing methods for accelerating video DiT sampling often rely on expensive fine-tuning or exhibit limited generalization capabilities. To this end, the authors propose a training-free and model-agnostic method to accelerate video DiTs. Specifically, the authors decouple sequence length reduction between attention features and allow the reduction scheduling to adaptively distribute reduced computations across blocks and denoising timesteps. In addition, the authors introduce a matching cache mechanism to minimize matching overhead. The authors provide extensive experimental validation. ## update after rebuttal The authors have addressed the majority of my concerns. I therefore maintain my positive rating. Claims And Evidence: The authors' claims are clear and have sufficient theoretical support. Methods And Evaluation Criteria: The proposed method and evaluation datasets used by the authors are reasonable. Theoretical Claims: I have carefully checked the correctness of the proofs for theoretical claims and found no relevant problems. Experimental Designs Or Analyses: I have carefully checked the soundness/validity of any experimental designs and analyses, and there are the following problems: (1) The authors mentioned a variety of Prior Reduction Methods in the paper (such as Bolya & Hoffman, 2023; Li et al., 2024; Kahatapitiya et al., 2024a), but only compared ToMe in the experimental analysis. More experimental comparisons are needed to verify the effectiveness of the paper's methods. (2) In Fig. 2, the author conducted a comparative experiment on Q in shallow/medium/deep and early/late. I want to know how K and V perform in these positions. Supplementary Material: I have carefully reviewed the supplementary material (example videos) provided by the authors.
Relation To Broader Scientific Literature: The paper provides new improvement ideas for existing methods and provides new potential for improving the actual generation efficiency of video DiTs. Essential References Not Discussed: None Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: (1) The authors mentioned a variety of Prior Reduction Methods in the paper (such as Bolya & Hoffman, 2023; Li et al., 2024; Kahatapitiya et al., 2024a), but only compared ToMe in the experimental analysis. More experimental comparisons are needed to verify the effectiveness of the paper's methods. (2) In Fig. 2, the author conducted a comparative experiment on Q in shallow/medium/deep and early/late. I want to know how K and V perform in these positions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the Reviewer **fiPH** for the valuable questions and comments. For the concerns and questions, here are our responses, along with supplementary figures and tables available at https://anon0728.github.io/icml-230-supplementary: **Q1**: The authors mentioned a variety of Prior Reduction Methods (such as Bolya & Hoffman, 2023 [1]; Li et al., 2024 [6]; Kahatapitiya et al., 2024a [7]) in the paper, but only compared ToMe [1] in the experimental analysis. More experimental comparisons are needed to verify the effectiveness of the paper's methods. **A1**: We thank the reviewer for the comment. While various prior reduction methods have been proposed, they are not directly designed for text-to-video diffusion models. Some approaches [1, 2, 3] target discriminative tasks such as image classification, while others [4, 5] focus on autoregressive generation in NLP. These methods typically truncate sequence lengths, making them incompatible with diffusion denoising tasks, which require the input and output sequence lengths to remain identical, as discussed in Sec 2.3. Li et al. [6] and Kahatapitiya et al. [7], on the other hand, focus on adapting image diffusion models for video editing tasks. However, their task setups and model designs differ significantly from native video diffusion models, making direct comparison inappropriate. Sequence length reduction for accelerating video diffusion remains relatively underexplored, despite its high potential as demonstrated in our work. Therefore, we primarily compare against ToMe, the only applicable baseline—even though it was originally proposed for image diffusion. Additionally, we evaluate and integrate our method with other acceleration techniques, such as the step-distilled FastVideo method (in Sec 4).
We further demonstrate compatibility with caching-based methods through additional experiments in **A3** of our response to Reviewer **wNPy**, which show that AsymRnR can provide further acceleration when combined with such techniques. **Reference** [1] Bolya, Daniel, and Judy Hoffman. "Token merging for fast stable diffusion." CVPR, 2023. [2] Koner, Rajat, et al. "Lookupvit: Compressing visual information to a limited number of tokens." ECCV, 2024. [3] Rao, Yongming, et al. "Dynamicvit: Efficient vision transformers with dynamic token sparsification." NIPS, 2021. [4] Leviathan, Yaniv, Matan Kalman, and Yossi Matias. "Selective attention improves transformer." preprint, 2024. [5] Xiao, Guangxuan, et al. "Duoattention: Efficient long-context llm inference with retrieval and streaming heads." preprint, 2024. [6] Li, Xirui, et al. "Vidtome: Video token merging for zero-shot video editing." CVPR, 2024. [7] Kahatapitiya, Kumara, et al. "Object-centric diffusion for efficient video editing." ECCV, 2024. --- **Q2**: In Fig. 2, the author conducted a comparative experiment on Q in shallow/medium/deep and early/late. I want to know how K and V perform in these positions. **A2**: Thank you for the comment. This observation also motivated our curiosity during the analysis, and we conducted a similar analysis on the $K$ and $V$. Since the analysis is based on random perturbation and the $K$ and $V$ tokens are in one-to-one correspondence, perturbing $K$ or $V$ yields equivalent effects. The feature-type sensitivity of matching-based (non-random) reduction is further analyzed in Sec 4.3. The performance trends for $K$ and $V$ mirror those of $Q$ mentioned in Sec 1, but the degradation is less obvious: 1. Perturbations in later blocks result in greater performance degradation. 2. Perturbing early timesteps primarily affects semantic accuracy, while perturbing later timesteps degrades visual details. Qualitative results are provided in the **supplementary Fig 6**.
To improve readability and due to page constraints, we present only the main analysis results in the **main manuscript Fig 2**. We will include the full analysis in the revision. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for addressing all of my concerns. I have no further questions and believe this work makes a clear contribution to the community. I therefore maintain my positive rating. --- Reply to Comment 1.1.1: Comment: Dear reviewer fiPH, We would like to express our sincere gratitude to you for acknowledging our work and providing constructive suggestions. Many thanks for the time and effort you took to review our work. The Authors
Summary: This paper studies the importance of different components in video DiTs and proposes Asymmetric Reduction and Restoration (AsymRnR) as a plug-and-play approach to accelerate video DiTs based on previous findings. Experiments on multiple open-source video generation models demonstrate the effectiveness of the proposed method. ## update after rebuttal The additional experiments on U-Net-based structures and integration with existing speedup methods address most of my concerns. Therefore, the rating is updated. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense for the problem or application at hand. Theoretical Claims: No. Experimental Designs Or Analyses: Yes, I have checked the soundness of the experimental designs and analyses. Seems fine to me. Supplementary Material: Yes, I have gone through the supplementary material. Relation To Broader Scientific Literature: This work shares similar motivation with ToMe, i.e., reducing the tokens to enable efficient inference, but exhibits some improvement on DiT-based video generation models. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The paper provides analysis on the importance of different components in video DiT and provides several key insights which may benefit later works when designing their methods. 2. The authors provide extensive analysis on different open-source video generation models and qualitative / quantitative evaluation validates the effectiveness of the proposed method. 3. The paper is well-organized and the writing is clear. Weaknesses: 1. While the proposed method has shown some improvement over the baseline method ToMe, ToMe was originally proposed on U-Net-based architectures and the comparisons are only conducted on DiT-based models.
This may be because AsymRnR only works on transformer structures containing the Q, K, V design, which limits its generalization ability. 2. Although the proposed method can boost the efficiency of current video generation models to some extent, the improvement in efficiency seems limited compared to other lines of methods, such as distillation methods or feature caching techniques, which could achieve beyond 10x acceleration. 3. The proposed method seems to be an improved version of ToMe based on the empirical findings in video DiT models; the technical contribution needs further justification. Other Comments Or Suggestions: N/A Questions For Authors: 1. It is noted that the authors choose different models of CogVideoX in Tab. 1 and Tab. 2; is there a specific reason for this experimental setup? 2. Shown in Tab. 2, the proposed method could result in improvement in VBench scores in certain cases and it is suggested to provide some justification for why this could happen. One possibility is that it is caused by the variance of VBench scores, and repeated evaluation may help to eliminate this effect. 3. While the author mentioned that 'Latency is measured using an NVIDIA A100 for CogVideoX variants and an NVIDIA H100 for the rest of models', it would be better to provide some justification on this setup. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the Reviewer **Mipk** for the valuable questions and comments. For the concerns and questions, here are our responses, along with supplementary figures and tables available at https://anon0728.github.io/icml-230-supplementary: **Q1**: The authors chose different models of CogVideoX in Tab 1 and Tab 2. Is there a specific reason for this experimental setup? **A1**: Thank you for the comment. CogVideoX 2B and 5B are distinct models with different architectures. We include both to comprehensively evaluate the effectiveness of AsymRnR. As clarified in **P6 (left, line 322-326)**, ToMe is only compatible with CogVideoX 2B and cannot be applied to other models, such as CogVideoX 5B and HunyuanVideo. In contrast, AsymRnR is compatible with all these models. Therefore, the comparison with ToMe is presented in **Tab 1**, while results on the other models are reported in **Tab 2**. --- **Q2**: Shown in Tab 2, the proposed method could result in improvement in VBench scores in certain cases and it is suggested to provide some justification for why this could happen. One possibility is that it is caused by the variance of VBench scores, and repeated evaluation may help to eliminate this effect. **A2**: We agree that variance exists in VBench results, as is common in generative benchmarks. However, the VBench experiments already include over 950 text prompts, and for each prompt, 5 videos are generated to mitigate the impact of randomness. Another possible explanation is the inherent redundancies in the overparameterized models, which may introduce minor negative effects. AsymRnR prunes these redundancies, potentially leading to slight improvements. This hypothesis is empirically supported by **Tab 1 and 2**, where AsymRnR exhibits minimal degradation in larger models with higher FLOPs—and in some cases, performance gains. **The supplementary Fig 1** also shows cases where AsymRnR improves the baseline generation.
--- **Q3**: It is mentioned that 'Latency is measured using an NVIDIA A100 for CogVideoX variants and an NVIDIA H100 for the rest of models'; it would be better to provide some justification on this setup. **A3**: Thank you for the question. Both the baseline models and AsymRnR impose no constraints on the underlying hardware. The experiments were conducted on different devices purely due to the availability of hardware at the time. --- Additionally, we would like to clarify a few points raised by Reviewer **Mipk**. --- **Q4**: AsymRnR only works on transformer structures. It has certain limitations in generalizing to UNet-backboned video diffusion models. **A4**: AsymRnR is designed to operate on attention layers, which are commonly present in diffusion models—including UNet models such as Stable Diffusion. Due to space constraints, we refer the readers to our **A4** response to Reviewer **LhNZ**, which includes additional experiments on the UNet-based video diffusion model AnimateDiff. --- **Q5**: The improvement in efficiency seems limited compared to other lines of methods, such as distillation methods or feature caching techniques, which could achieve beyond 10x acceleration. **A5**: To the best of our knowledge, open-sourced step-distilled video DiTs (eg, FastVideo) can achieve approximately 5× speedup but require substantial training resources. Feature caching methods generally yield around 1.3× acceleration in video DiTs, as shown in our **A3** response to Reviewer **wNPy**—on par with AsymRnR. Additionally, AsymRnR is complementary to these methods and can be integrated with them for additional acceleration. - The integration with the step-distilled FastVideo method is presented in Sec 4, achieving a total **6.18× speedup over HunyuanVideo**. - Integration with the caching-based method PAB results in a **1.71× speedup** without visible distortions. We also refer Reviewer **Mipk** to response A3 to Reviewer **wNPy** for detailed results.
--- **Q6**: The proposed method seems to be an improved version of ToMe based on the empirical findings in video DiT models. **A6**: Both ToMe and AsymRnR accelerate through reducing the number of tokens. However, ToMe relies heavily on heuristic designs and lacks theoretical foundation (eg, the use of cosine similarity). In contrast, we provide a theoretical justification for the matching-based reduction methods in **Corollary 3.1**, which directly informs our design choices (eg, the matching metric in Tab 5). Furthermore, inspired by the QKV-specific behavior from our exploration, we propose several key components: asymmetric strategy (Sec 3.3), scheduling mechanism (Sec 3.4), and the matching cache (Sec 3.5). Together, our theoretical insights, empirical analysis, and extensive experiments drive the design of AsymRnR and lay the groundwork for future research across a broader range of token reduction methods.
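To make the matching-based reduction idea discussed above concrete, here is a toy ToMe-style bipartite merging sketch in numpy. This is a simplified illustration of similarity-based sequence-length reduction, not AsymRnR's actual algorithm; in particular, the restoration step (copying merged outputs back to their original positions) is omitted:

```python
import numpy as np

def reduce_tokens(tokens, r):
    """Merge the r most redundant 'source' tokens into their most similar
    'destination' tokens, shrinking the sequence from n to n - r."""
    dst, src = tokens[0::2], tokens[1::2]            # alternate bipartite split
    d_hat = dst / np.linalg.norm(dst, axis=1, keepdims=True)
    s_hat = src / np.linalg.norm(src, axis=1, keepdims=True)
    sim = s_hat @ d_hat.T                            # cosine similarity
    best = sim.argmax(axis=1)                        # best destination per source
    order = np.argsort(-sim.max(axis=1))             # most similar sources first
    merge, keep = order[:r], order[r:]
    dst = dst.copy()
    for i in merge:                                  # fold source into destination
        dst[best[i]] = (dst[best[i]] + src[i]) / 2
    return np.concatenate([dst, src[keep]], axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8))
assert reduce_tokens(x, r=4).shape == (12, 8)        # 16 tokens -> 12
```

The choice of similarity metric in the matching step is exactly the design point the rebuttal's Corollary 3.1 discussion addresses.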
Summary: The paper presents AsymRnR, a method to accelerate video DiTs without requiring retraining. It exploits the variability in redundancy among different feature tokens across various model blocks and denoising steps. By asymmetrically reducing the computational load during the attention operations, AsymRnR achieves speedups with small loss in output quality. It integrates seamlessly with existing state-of-the-art DiT architectures, enhancing their efficiency across multiple benchmarks. Claims And Evidence: The paper makes several key claims, which are well-supported by theoretical analysis and experimental results: 1. Claim: AsymRnR provides significant speedup in video DiTs without retraining. Evidence: Experiments on multiple SOTA models show 24–30% reduction in latency while maintaining high perceptual quality. 2. Claim: The asymmetric reduction strategy improves efficiency while minimizing quality loss. Evidence: Ablation studies demonstrate that reducing Q tokens too aggressively degrades quality, while reducing K&V is more forgiving. The asymmetric approach balances quality and efficiency better than prior uniform reduction methods (e.g., ToMe). 3. Claim: Matching cache reduces computational overhead while maintaining accuracy. Evidence: Ablations in Table 4 show that increasing cache steps significantly reduces latency (from 134s to 118s) with only minor quality degradation. Methods And Evaluation Criteria: 1. The benchmark datasets and evaluation metrics are appropriate for assessing video generation quality and efficiency. 2. The method is tested on multiple state-of-the-art DiT architectures, and follows standard evaluation protocols for video generation. Theoretical Claims: 1. The KL divergence-based analysis (Corollary 3.1) is mathematically sound and provides a formal justification for token reduction strategies. 2. The derivation of the Monte Carlo estimator for KL divergence is based on prior work (Wang et al., 2009) and appears correct. 3. 
The discussion on token similarity metrics (dot product vs. Euclidean distance) is insightful and supported by empirical findings. Experimental Designs Or Analyses: Potential concerns: 1. The paper does not explicitly discuss the worst-case computational overhead introduced by matching cache and dynamic scheduling. 2. The choice of similarity thresholds for reduction scheduling is not well-explained—how were these hyperparameters tuned? Supplementary Material: The supplementary material includes additional qualitative results and ablations, which further support the claims. Additional visual comparisons between AsymRnR and baselines (CogVideoX-2B, CogVideoX-5B, etc.) are useful. Relation To Broader Scientific Literature: The paper builds on prior token reduction techniques (e.g., ToMe (Bolya & Hoffman, 2023)) but extends them to video DiTs with asymmetric scheduling and caching. Connections to diffusion model acceleration (e.g., distillation approaches like InstaFlow (Liu et al., 2024)) are discussed, highlighting how AsymRnR differs by being training-free. The work is related to efficient attention mechanisms (e.g., Linformer, Performer) but is more specialized for diffusion models. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: 1. No fine-tuning required, making the method easily adaptable. 2. The asymmetric design is reasonable and supported by theoretical analysis. 3. Experiments conducted on various SOTA models have demonstrated strong performance. The ablation study is well-conducted. 4. The overall writing is good and the logical flow is clear. Weakness: 1. Hyperparameter sensitivity: Similarity thresholds for reduction scheduling lack clear explanation. 2. Computational overhead of matching cache should be discussed more explicitly. 3. Limited discussion on worst-case performance—are there cases where AsymRnR degrades performance? Other Comments Or Suggestions: 1.
The paper could clarify the trade-offs between speedup and quality in more detail—e.g., when does AsymRnR start degrading output? 2. A qualitative analysis of artifacts introduced by aggressive reduction would be useful. Questions For Authors: 1. How are the similarity thresholds for reduction scheduling chosen? Are they manually tuned per model, or is there an automated selection process? 2. What is the computational overhead of the matching cache? Does it introduce significant latency in some cases? 3. Does AsymRnR ever degrade performance compared to a baseline? If so, under what conditions? 4. Could the method be extended to latent-space DiTs (e.g., Video LDMs)? Code Of Conduct: Affirmed. Overall Recommendation: 3
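The Monte Carlo KL-divergence estimation this review refers to is based on nearest-neighbour distances (Wang et al., 2009). Below is a minimal sketch of such a k-NN estimator, as a generic illustration of the estimator family rather than the paper's exact formulation:

```python
import numpy as np

def knn_kl_estimate(x, y, k=1):
    """k-nearest-neighbour estimator of D_KL(P || Q) from samples,
    in the spirit of Wang, Kulkarni & Verdu (2009).
    x: (n, d) samples from P;  y: (m, d) samples from Q."""
    n, d = x.shape
    m = y.shape[0]
    # rho_i: distance from x_i to its k-th nearest neighbour in x \ {x_i}
    dxx = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(dxx, np.inf)
    rho = np.sort(dxx, axis=1)[:, k - 1]
    # nu_i: distance from x_i to its k-th nearest neighbour in y
    dxy = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    nu = np.sort(dxy, axis=1)[:, k - 1]
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1))

rng = np.random.default_rng(0)
p = rng.normal(0.0, 1.0, size=(1500, 1))
q = rng.normal(0.0, 1.0, size=(1500, 1))
print(knn_kl_estimate(p, q))  # estimate near 0 for identical distributions
```

The brute-force distance matrices here are fine for a demonstration; large-scale use would require approximate nearest-neighbour search.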
Rebuttal 1: Rebuttal: We sincerely thank the Reviewer **LhNZ** for the valuable questions and comments. For the concerns and questions, here are our responses, along with supplementary figures and tables available at https://anon0728.github.io/icml-230-supplementary: --- **Q1**: How were these hyperparameters (similarity thresholds and reduction rate) tuned? Are they manually tuned per model, or is there an automated selection process? **A1**: Thank you for the comment. The hyperparameters are manually tuned through only a simple and efficient process—typically within 10 iterations, with each iteration requiring only 1 inference. In practice: 1. We start the first iteration with a low similarity threshold of 0.5 and a low reduction rate of 0.3. 2. We run 1 inference with an arbitrary text prompt. If the generation quality remains good, we increase the reduction rate by 0.2 to encourage more aggressive reduction. 3. When a poor generation occurs, we revert to the previous reduction rate, raise the threshold by 0.1, and repeat step 2. Moreover, hyperparameter re-tuning is not always necessary. In practice, we are able to reuse the same hyperparameters across different model architectures and even models employing different diffusion schedulers. Due to space limitations, we kindly refer Reviewer **LhNZ** to our **A3** response to Reviewer **wNPy** for further details. This simple heuristic guides the tuning process with minimal effort. We will include the hyperparameter tuning process in the revision. --- **Q2**: What is the computational overhead of the matching cache? Does it introduce significant latency in some cases? **A2**: The matching cache itself does not introduce additional computation. The complexity of a single matching step is analyzed in Appendix C and depends solely on the video size, which is typically fixed to the training resolution. It does not depend on the text prompt or reduction rate.
With a matching cache step of $s$, the total matching cost can be further reduced by a factor of $1/s$. In practice, the matching overhead takes approximately **7 seconds** for each generation in the HunyuanVideo experiments (see Sec 4 and Appendix B for more detailed configurations), which is negligible compared to the **over 200 seconds of total acceleration** achieved. --- **Q3**: Does AsymRnR ever degrade performance compared to a baseline? If so, under what conditions? **A3**: Yes, AsymRnR may lead to visible quality degradation. 1. Under aggressive reduction settings, such as extremely low similarity thresholds or high reduction rates, AsymRnR may introduce distortions, pixelation, or blurring. 1. The quantitative analysis for varying reduction rates is provided in Sec 4.3. 2. In addition, the **supplementary Fig 2** shows a qualitative study, where we vary the similarity threshold and reduction rate of HunyuanVideo. Under aggressive reduction settings (eg, similarity threshold 0.6 and reduction rate 0.7), noticeable distortion is observed. 2. Additionally, when the baseline model already produces unsatisfactory outputs (eg, under extremely fast motion or characters, as shown in the **supplementary Fig 3**), AsymRnR may amplify these issues. However, in most cases where the baseline performs well, AsymRnR maintains stable performance without introducing topic-specific degradation. We will include the bad case analysis in the revision. --- **Q4**: Could the method be extended to latent-space DiTs (e.g., Video LDMs)? **A4**: The experiments in Sec 4 show our application of AsymRnR to latent DiTs such as CogVideoX and HunyuanVideo. AsymRnR can also be extended to UNet-based video diffusion models, as these models include attention blocks where AsymRnR operates. As Video LDM is not open-sourced, we use AnimateDiff to demonstrate its extensibility. We apply AsymRnR to the spatial self-attention modules in the highest-resolution stages of AnimateDiff.
The corresponding quantitative and qualitative results are presented in the **supplementary Fig 5 and Tab 2.** We achieve a 1.20x speedup with no visible quality degradation. We will include the additional UNet-based experiments in the revision. Note that UNet models adopting factorized spatiotemporal blocks perform worse than full 3D DiTs at the same parameter scale, as shown in the **supplementary Tab 3**, and they have seen less adoption since early 2024. Our work primarily focuses on SOTA video DiTs in the main manuscript. Nonetheless, AsymRnR remains compatible with the legacy UNet-based video diffusion models.
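The matching-cache saving described in A2 above (total matching cost reduced by a factor of $1/s$) amounts to recomputing the matching only every $s$ denoising steps and reusing it otherwise. A minimal sketch, with a hypothetical `compute_matching` stand-in for the actual (expensive) matching computation:

```python
def run_denoising(num_steps, cache_steps, compute_matching):
    """Recompute token matching every `cache_steps` steps and reuse the
    cached result otherwise, cutting matching calls by ~1/cache_steps."""
    cache = None
    calls = 0
    for step in range(num_steps):
        if step % cache_steps == 0:
            cache = compute_matching(step)  # expensive: done once per window
            calls += 1
        _ = cache  # cached matching indices reused for reduction at this step
    return calls

# Hypothetical stand-in for the matching computation.
compute_matching = lambda step: {"step": step}
assert run_denoising(50, 5, compute_matching) == 10  # 50 steps -> 10 matchings
```

This reuse is what makes the per-generation matching overhead a small constant relative to the total sampling time.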
Summary: This paper proposes to asymmetrically reduce the sequence length of attention features to accelerate video DiTs. The proposed approach, called AsymRnR, leverages the observation that different components and stages exhibit varying levels of redundancy. The method introduces a reduction schedule to adaptively distribute reductions across components and a matching cache to enhance efficiency. The authors demonstrate the effectiveness on several video DiTs. Claims And Evidence: Most claims made in the submission are generally well-supported. However, the paper claims that AsymRnR achieves "negligible degradation in output quality" in some cases and even improves it. While the experimental results show high VBench scores and low LPIPS values, some visual cases in Figures 1, 6, and 7 show misaligned motions with the original results. I wonder how to align these results in practice. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem of accelerating video diffusion transformers. Theoretical Claims: The paper includes theoretical analysis to motivate the reduction strategy, specifically through the estimation of KL divergence using Monte Carlo methods. Experimental Designs Or Analyses: Yes. The authors compare AsymRnR with existing token reduction methods and show performance improvements in terms of efficiency and quality. However, the speedups are very limited, at most 1.3x in Tab. Supplementary Material: Yes. The proposed code and the video results. Relation To Broader Scientific Literature: Diffusion models, efficient neural network architectures, and video generation acceleration. Essential References Not Discussed: None Other Strengths And Weaknesses: ## Strengths: * The proposed method of asymmetric reduction is reasonable. * The paper provides theoretical analysis to support the proposed reduction strategy, enhancing the credibility of the approach. * The paper is easy to follow.
## Weaknesses:
* Despite maintaining semantic consistency, the generated videos may exhibit visual discrepancies compared to baseline models.
* The performance of AsymRnR depends on hyperparameter configurations (e.g., similarity thresholds and reduction rates), which may require tuning for different models.
* The acceleration effects shown in Tables 1 and 2 are very limited, **with a maximum speedup of only 1.3 times**. Can the caching method be combined with distillation methods? For example, **can the caching approach be applied on top of a model that has already been accelerated through distillation to achieve further speedup**?

Other Comments Or Suggestions: The semantic misalignment, the limited speedups, and the combination with distillation methods are important for my assessment. I suggest the authors address these issues during the rebuttal. Questions For Authors: 1. What is the meaning of $d$ in Equation 2? 2. Figures 1, 6, and 7 may alter the semantic content of the video. 3. In Equation 5, $S$ needs to be calculated using the formula in Section 3.5. After calculation, how is $\hat{S}$ determined for $H$/$Q$/$K$/$V$? Is it simply the average value? Code Of Conduct: Affirmed. Overall Recommendation: 3
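As background for the Monte Carlo KL estimation mentioned under Theoretical Claims, a generic sampling-based estimator looks like the sketch below. The Gaussian choice is ours purely for illustration; the paper's actual estimator and distributions may differ:

```python
# Generic Monte Carlo estimate of KL(p || q) from samples of p.
# Illustrative only: the distributions here are hypothetical Gaussians,
# not the ones used in the paper.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Two 1-D Gaussians with known densities
p = norm(loc=0.0, scale=1.0)
q = norm(loc=1.0, scale=1.5)

# KL(p || q) = E_{x~p}[log p(x) - log q(x)], estimated by sampling from p
x = p.rvs(size=200_000, random_state=rng)
kl_mc = float(np.mean(p.logpdf(x) - q.logpdf(x)))

# Closed form for Gaussians, used here only as a sanity check:
# KL = log(s_q/s_p) + (s_p^2 + (mu_p - mu_q)^2) / (2 s_q^2) - 1/2
kl_exact = np.log(1.5) + (1.0**2 + 1.0**2) / (2 * 1.5**2) - 0.5

assert abs(kl_mc - kl_exact) < 0.05
```

The estimator's error shrinks as the sample count grows, which is why such estimates are usually reported with large sample sizes.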
Rebuttal 1: Rebuttal: We sincerely thank the Reviewer **wNPy** for the valuable questions and comments. For the concerns and questions, here are our responses, along with supplementary figures and tables available at https://anon0728.github.io/icml-230-supplementary: **Q1**: While the experimental results show high VBench scores and low LPIPS values, some visual cases in **Fig 1, 6, 7** show visual discrepancies (e.g., misaligned motions) compared to baseline models. I wonder how to align these results in practice. **A1**: Thank you for the comment. We agree that there are visual discrepancies; however, no unique ground-truth video exists for a given text prompt, and multiple generations can be equally satisfactory. Therefore, visual quality and textual alignment (measured by VBench score) are the primary performance metrics. Note that some visual discrepancies occur in cases where AsymRnR produces better results than the baseline model, as illustrated in **supplementary Fig 1**, though that was not our intent. A similar phenomenon is also reported in related works (e.g., *Selective Attention Improves Transformer*). We include baseline generations in the figures to demonstrate that AsymRnR achieves comparable visual quality and textual alignment in most cases, without implying that the outputs should be visually identical. We will include the text prompts alongside the figures in the revision to avoid potential misunderstanding. --- **Q2**: The performance of AsymRnR depends on hyperparameter configurations, which may require tuning for different models. **A2**: We acknowledge that the hyperparameter (HP) search is performed manually; however, it is very efficient: typically within 10 iterations, with each iteration requiring only one inference run. Due to the shared concern and word limit, we kindly refer you to our response **A1** to Reviewer **LhNZ** for the detailed HP search process. Moreover, the HPs are transferable: 1.
across text prompts; 2. across models, with (a) different architectures: CogVideo-2B HPs are reused for the 5B variant, and (b) different ODE schedulers: FastVideo HPs are transferred to HunyuanVideo; 3. when integrating AsymRnR with caching methods (detailed in **A3** below), we can also maintain the same HPs as in Sec 4 and Appendix B. Notably, other acceleration methods also involve tuning efforts: step distillation approaches require substantial training, and caching-based methods still necessitate model-specific HP tuning. In comparison, AsymRnR's HP tuning is lightweight. --- **Q3**: Can the caching method be combined with distillation methods? For example, can the caching approach be applied on top of a model that has already been accelerated through distillation to achieve further speedup? **A3**: Thank you for the question.
- AsymRnR is compatible with step-distilled models such as FastVideo, as discussed in Sec 4. Notably, although FastVideo is a distilled 6-step video generation model (a 5× speedup over the 30-step HunyuanVideo), AsymRnR achieves a further 1.24× speedup, resulting in a total speedup of **6.18×** over the original HunyuanVideo.
- Moreover, AsymRnR is also compatible with other caching methods. We compared PAB and PAB + AsymRnR on HunyuanVideo. PAB is configured using the official settings provided on the authors' GitHub. AsymRnR reuses the HPs from Sec 4 and Appendix B without modification. **The supplementary Tab 1 and Fig 4** show the compatibility of AsymRnR with the PAB cache method, achieving a total **1.71×** speedup without significant performance loss.
- To the best of our knowledge, caching-based methods such as PAB are not compatible with step-distilled video diffusion models. AsymRnR can be integrated with either caching methods or step-distilled models for acceleration, but their joint integration is beyond the scope of this work.
In summary, although AsymRnR alone does not provide huge acceleration, its compatibility and ease of integration allow it to work seamlessly with other acceleration methods, offering additional benefits. --- **Q4**: What is the meaning of $d$ in Eq 2? **A4**: The $d$ denotes the dimensionality of the vector samples $X_i$, consistent with the definitions throughout the paper. We will explicitly clarify this notation in Corollary 3.1 in the revision. --- **Q5**: After calculating $S$ using the formula in Sec 3.5, how is $\hat{S}$ in Eq 5 determined for $H, Q, K, V$? Is it simply the average value? **A5**: Thank you for the comment. The notation $S(A, t, b) = \mathrm{BSM}(A, t, b)$ appears in Sec 3.5 (P6, left column, line 297), where $A \in \{H, Q, K, V\}$ represents the feature type, and $t$ and $b$ denote the timestep and block. This indicates that $S$ (and $\hat{S}$) is computed separately for each of $H$, $Q$, $K$, and $V$. No averaging or aggregation is applied in our method. We will include the definition of $S$ and the explanation above immediately after Eq 5 in the revision. --- Rebuttal Comment 1.1: Comment: Thanks for the efforts. My concerns have been mostly addressed and I will raise the score. --- Reply to Comment 1.1.1: Comment: Dear reviewer wNPy, We would like to express our sincere gratitude for acknowledging our work and providing constructive suggestions. Many thanks for the time and effort you took to review our work.
Pivoting Factorization: A Compact Meta Low-Rank Representation of Sparsity for Efficient Inference in Large Language Models
Accept (poster)
Summary: I'm providing my whole review here, rather than giving partial and discontinued comments here and there. The paper proposes an extension to SVD-based low-rank approximation of weights. The insight being employed is that a rank r matrix has at most r linearly independent rows. Therefore, a UV decomposition, such as via a truncated SVD, causes the decomposition to be redundant since it employs 2r vectors overall. To bypass this redundancy, a pivoting factorization is used, where one of the matrices is pivoted such that its upper half is an identity matrix, which need not be stored, and the rest constitute the coefficient matrix which multiplies the pivot rows. This is a nice application of basic linear algebra. In section 4, the weight decomposition is initialized using SVD-LLM, and then an online algorithm to refine the decomposition is proposed. SVD-LLM is not the SOTA for low-rank decomposition, and therefore, this may explain why significant accuracy loss is reported in the experimental section. I'd like to suggest that the authors instead use a SOTA technique, such as ESPACE (NeurIPS 2024). At the very least, can we have a comparison with ESPACE, which showed a much better accuracy vs compression trade-off? E.g., they showed <1 PPL increase at 50% compression for Llama2-7B, as opposed to PIFA, which increases the PPL from ~5 to ~12. If the authors accept my suggestion above, then the following comment will not matter. However, if they choose to stick with SVD-LLM, then please address the following. In equations (4) and (5), a nice online algorithm for finding an optimal U matrix minimizing the layer-output Frobenius norm is shown. But note that this fixes matrix V, which need not be optimal. This may explain why SVD-LLM performs so poorly anyway. If ESPACE is used instead, we'd find a provably optimal projection matrix which will reduce the dimensionality of activations, and of the weights by multiplication associativity.
Then the online accumulation error minimization reconstruction can still be employed to further refine the matrix resulting from weight projection; it would fall into place in lieu of matrix U in the presented setup. If the above is rejected (I hope not), does it then make sense to have an alternating optimization algorithm, such as an EM, rather than just refining V once in equation (8)? The reliance on a mix of dense and compressed data-flows in eq. (7) is interesting. But setting Lambda = 0.25 means most of the information comes from the low-rank branch. Once the model becomes more accurate, does it make sense to increase the value of lambda progressively to allow for more fitting to the golden uncompressed baseline? Perplexity results are interesting (even though the numbers are not impressive, likely due to what I discussed above). But can we have more evaluations on more downstream tasks, such as the LM eval harness and MMLU? Claims And Evidence: Please see main review. Methods And Evaluation Criteria: Please see main review. Theoretical Claims: Please see main review. Experimental Designs Or Analyses: Please see main review. Supplementary Material: I did not. Relation To Broader Scientific Literature: Please see main review. Essential References Not Discussed: Please see main review. Other Strengths And Weaknesses: Please see main review. Other Comments Or Suggestions: Please see main review. Questions For Authors: Please see main review. Code Of Conduct: Affirmed. Overall Recommendation: 3
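The review's point about equations (4) and (5), that with $V$ fixed the optimal $U$ minimizes the layer-output Frobenius error, can be illustrated with a toy least-squares sketch. This is our own minimal illustration (all shapes and variable names are assumptions), not the paper's online accumulation algorithm:

```python
# Toy illustration: with V fixed, the U minimizing ||W X - U (V X)||_F is
# an ordinary least-squares solution. NOT the paper's streaming
# implementation; all shapes below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, n = 24, 32, 8, 500

W = rng.standard_normal((d_out, d_in))   # dense weight to approximate
V = rng.standard_normal((r, d_in))       # fixed (possibly suboptimal) V
X = rng.standard_normal((d_in, n))       # calibration activations

Y = W @ X                                # target layer outputs, (d_out, n)
Z = V @ X                                # compressed activations, (r, n)

# Closed form via normal equations: U = Y Z^T (Z Z^T)^{-1}
U = Y @ Z.T @ np.linalg.inv(Z @ Z.T)

# Cross-check against a generic least-squares solve on the transposed system
U_ls, *_ = np.linalg.lstsq(Z.T, Y.T, rcond=None)
assert np.allclose(U, U_ls.T)

# Relative error of the compressed layer; strictly below 1 since U Z is an
# orthogonal projection of Y onto the row space of Z
err = np.linalg.norm(Y - U @ Z) / np.linalg.norm(Y)
```

The reviewer's follow-up suggestion amounts to alternating this closed-form $U$ update with a re-fit of $V$, in the style of alternating least squares.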
Rebuttal 1: Rebuttal: We greatly appreciate your thoughtful feedback and the opportunity to address your concerns. **Weakness 1:** In section 4 ... PPL from ~5 to ~12. **Reply:** We have conducted additional experiments to clarify the comparison. It is important to note that **ESPACE includes a fine-tuning stage with 200B tokens** (as stated in Section 4.4 of the ESPACE paper), which significantly contributes to the <1 PPL increase at 50% compression. In contrast, the results we reported in **Tables 2 and 3** of our paper are **without any retraining**. Fine-tuning results are presented in **Table 4**, where each pruned model is only retrained for **~128M tokens**, orders of magnitude fewer than ESPACE. To ensure a fair comparison, we **reproduced the pruning step of ESPACE** and compared it against SVD-LLM. ESPACE proposes six variants: MSE (Eq. 6), MSE-NORM (Eq. 7), GO-MSE (Eq. 8), GO-MSE-NORM (Eq. 8), NL-MSE (Eq. 9), and NL-MSE-NORM (Eq. 9). We exclude the NL-MSE variants from our comparison as they rely on backpropagation, which is **infeasible on memory-constrained GPUs** due to their high resource demands.

**Perplexity on WikiText2 at 50% density using LLaMA2-7B:**

| Pruning Method (X) | X | X + PIFA | X + M | X + MPIFA |
|-|-|-|-|-|
| SVD-LLM (W) | 33.27 | 19.64 | 16.55 | **12.77** |
| ESPACE (MSE) | 280.19 | 144.32 | 20.99 | **16.84** |
| ESPACE (MSE-NORM) | 172.30 | 113.82 | 21.73 | **17.42** |
| ESPACE (GO-MSE) | 41.75 | 24.17 | 17.47 | **13.55** |
| ESPACE (GO-MSE-NORM) | 37.45 | 23.19 | 17.47 | **13.63** |

*SVD-LLM (W)* indicates the standalone pruning output from SVD-LLM. From the results:
- **SVD-LLM** performs slightly better than ESPACE's best variant (**GO-MSE-NORM**) under comparable conditions (i.e., without fine-tuning).
- **PIFA**, **M**, and **MPIFA (PIFA + M)** consistently improve all ESPACE variants, demonstrating that our techniques are **general-purpose and complementary**, enhancing the performance of any low-rank pruning method.
- Based on these observations, **SVD-LLM remains the strongest initialization** among all tested options when no fine-tuning is used. We have included this new comparison and discussion in the updated version of the manuscript. **Weakness 2:** In equations (4) and (5) ... presented setup. **Reply:** We agree that the quality of the initial low-rank decomposition, especially the choice of matrix $V$, can significantly influence the performance of the final reconstruction. A **better-initialized matrix $V$** generally leads to a **more accurate reconstructed matrix $U$**, resulting in lower final PPL. Currently, **SVD-LLM remains the most effective low-rank pruning method** in our setup, but this remains an open question. If future methods such as ESPACE can produce even better projections, they can naturally be integrated with MPIFA to achieve improved performance. Our framework is flexible and designed to **enhance any low-rank pruning method**, not tied to any single initialization strategy. We have added this analysis and discussion to the updated version of the manuscript. **Weakness 3:** Does it ... equation (8)? **Reply:** We assume that by "EM" you are referring to an alternating approach that iteratively updates $U$ and $V$ using Equations (5) and (8) with more than 1 round, similar to **alternating least squares (ALS)**. We are currently exploring this direction and will share the results as soon as they become available. **Weakness 4:** The reliance on a mix ... golden uncompressed baseline? **Reply:** In our preliminary findings, increasing $\lambda$ leads to **overfitting on the calibration data**, resulting in **low perplexity on the calibration set** but **high perplexity on the full WikiText2 dataset**. To mitigate this overfitting, we explored several strategies: 1. **Reducing $\lambda$**, as mentioned in the right column of line 256, where the low-rank data flow acts as a form of regularization. 2. 
**Increasing the size of the calibration dataset**, which could help improve generalization. 3. Applying **additional regularization techniques** to control overfitting. With **sufficient calibration data**, increasing $\lambda$ may become beneficial. We are currently running experiments to test this hypothesis and will report the results as soon as they are available. **Weakness 5:** Can we have more evaluations on more downstream tasks such as LM eval harness and MMLU? **Reply:** We have incorporated **zero-shot evaluations on 8 downstream tasks** from the **SuperGLUE benchmark**, using the `lm-evaluation-harness` framework as recommended. The results, available at https://anonymous.4open.science/r/PIFA-68C3/zero_shot.png, show that **MPIFA_NS outperforms other low-rank baselines on 6 out of 8 tasks**, and reduces the average accuracy gap by **42.8%** compared to the best-performing baseline. These evaluations have been added to the revised manuscript. We are currently working on extending our evaluation to include **MMLU** and will provide updates as results become available. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the detailed response. I think my suggestion was not fully understood so I wish to clarify that. When weight are "low-rankified" the effective model size reduces and model expressivity goes down. ESPACE showed that as a remedy to this issue, one can apply low-rankification to activations such that weight parameters and optimizer states are fully available for continuous training. This setup is more interesting than the simple ad-hoc one-shot compression. And in this setup, ESPACE is known to be SOTA due to its activation centricity. Where it gets interesting is that after continuous training, once projection matrices and weights are pre-computed and frozen, we obtain a SOTA compressed model with a low-rank structure in the GEMM layers. 
I think it will be very cool to apply PIFA on top of that in order to prune out excess redundancy in the low-rank structure itself. This has the potential to improve on the current SOTA set by ESPACE. I urge the authors to consider this as future work. For now, given that the response is satisfactory, I maintain my score of a weak accept. Good job. --- Reply to Comment 1.1.1: Comment: Thank you for the thoughtful feedback. We truly appreciate the reviewer's insights on ESPACE. We find the activation-centric approach introduced in ESPACE to be a conceptually elegant and practically impactful advance in low-rank LLM compression. While other methods like FWSVD, ASVD, and SVD-LLM focus on using SVD to decompose the weight matrix directly, ESPACE reframes the problem as finding an optimal low-rank projection matrix for activations, minimizing $\|PP^T X - X\|$, and leverages matrix multiplication associativity to yield compressed weight structures at inference time. In this sense, ESPACE is particularly well-suited for continuous training. Inspired by the reviewer's suggestion, we conducted a **new experiment that integrates PIFA with the ESPACE-compressed model after continuous training**. We fine-tuned an ESPACE-compressed LLaMA2-7B model at 80% density. Due to time constraints, we only fine-tuned with 128M tokens. We compare the results of ESPACE alone and applying PIFA on top of ESPACE.

**Evaluation Results (LLaMA2-7B @ 80% Density, 128M Token Fine-tuning)**

| Metric | ESPACE Only | ESPACE + PIFA |
|-|-|-|
| **All Parameters** | 5.18B | 4.31B |
| **GPU Memory** | 10.4G | 8.8G |

| Dataset | ESPACE PPL | ESPACE + PIFA PPL |
|-|-|-|
| WikiText2 | 6.5009 | 6.5008 |
| C4 | 10.1392 | 10.1395 |

1. **Lossless Compression**: The perplexity difference between ESPACE and ESPACE+PIFA is negligible (<0.001), confirming that **PIFA introduces no additional loss**. 2.
**Efficiency Gains**: The **overall model memory footprint** is further reduced when PIFA is applied—resulting in **~15% lower GPU memory usage** even after compression. 3. **Complementary Design**: PIFA effectively **prunes residual redundancy** in the already-compressed low-rank structure, showcasing its **value as a drop-in, lossless post-processing plugin** for the SOTA compressed model of ESPACE. We fully agree with the reviewer’s vision: PIFA could be a **natural complement to ESPACE**, helping it **further improve compression efficiency without compromising accuracy**. We view this as an exciting direction for future research and hope this preliminary result illustrates its potential. Thank you once again for your thoughtful suggestions. We deeply appreciate the time and care you’ve taken in reviewing our work.
Summary: This paper is concerned with sparse inference, which aims to speed up LLMs via sparsity. It is argued in this paper that previous methods either require specific hardware (e.g., semi-structured pruning) or yield degraded performance (low-rank pruning). This paper proposes a low-rank pruning method named PIFA that achieves decent performance while imposing no hardware requirements. Specifically, PIFA first uses pivot-row discovery to eliminate potential representation redundancy when conducting low-rank decomposition, and then utilizes a minimization-reconstruction estimator to alleviate potential accumulated errors across layers. The experimental results in terms of both effectiveness (perplexity) and efficiency (throughput) demonstrate the usefulness of PIFA. Essential ablation studies show the adequacy of the design choices. Claims And Evidence: The claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: The methods are mostly clear and the evaluation criteria are adequate. However, I still have several concerns: 1) It is not very clear how QR and LU decomposition, serving as the backbone for finding pivot rows, would differ from each other. 2) It would be much better to also compare against structured pruning as a baseline. Theoretical Claims: The proofs for the theoretical claims are correctly justified. Experimental Designs Or Analyses: The experimental designs and analyses are sound and valid. However, I still have several concerns: 1) considering structured pruning also as a baseline would strengthen the contributions of this work. Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: How about the memory footprint of PIFA and the baselines? Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed review and valuable insights. We are glad to address your concerns and provide clarifications. **Weakness 1:** It is not very clear how QR or LU decomposition serve as the backbone of finding pivot rows would differ from each other. **Reply:** The key idea behind PIFA is to select a set of **linearly independent pivot rows** that can be used to express the remaining non-pivot rows as linear combinations. Since a matrix of rank *r* contains multiple valid sets of *r* linearly independent rows, any such set can serve the purpose of PIFA. **QR decomposition with column pivoting** and **LU decomposition with row pivoting** are variants of QR and LU, respectively, designed to improve numerical stability over their unpivoted counterparts. Their pivoting mechanisms reorder the columns (or rows) so that the leading *r* columns (or rows) are guaranteed to be linearly independent. The corresponding permutation matrix, produced as a byproduct of either decomposition, can then be used to identify the indices of the pivot rows (or columns). In summary, both **QR and LU decomposition with pivoting** can be used to extract a valid set of pivot indices, and **they are mathematically lossless in the infinite-precision setting**. To further clarify their practical differences under limited numerical precision, we conducted additional experiments comparing the **residual error** when using pivot rows to reconstruct non-pivot rows under both **Float32** and **Float16** precision:

| Precision | Method | Residual (±) |
|-|-|-|
| Float32 | QR | 8.80e-13 ± 3.51e-14 |
| Float32 | LU | 1.94e-12 ± 1.18e-13 |
| Float16 | QR | 9.56e-08 ± 3.71e-09 |
| Float16 | LU | 2.16e-07 ± 1.59e-08 |

These results are averaged over 10 random singular matrices. While both methods yield very low residual errors, **QR with pivoting achieves lower numerical error than LU**. This suggests that **QR decomposition with pivoting is the preferred choice** for identifying pivot rows in PIFA.
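The pivot-row selection and residual check described in this reply can be reproduced in miniature. The sketch below is our own illustration using `scipy.linalg.qr` with column pivoting applied to $W^T$ (so column pivots of $W^T$ correspond to rows of $W$); the paper's actual routines and matrix sizes may differ:

```python
# Hedged sketch of pivot-row selection for a low-rank matrix.
# Illustrative only: sizes and routine choices are our assumptions.
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
n, r = 64, 16

# Random rank-r matrix W = A @ B with A: (n, r), B: (r, n)
W = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# QR with column pivoting on W^T: the first r pivots index r
# linearly independent rows of W.
_, _, piv = qr(W.T, pivoting=True)
pivot_rows, rest = piv[:r], piv[r:]

# Express the remaining rows as linear combinations of the pivot rows:
# W[rest] ≈ C @ W[pivot_rows], where C is the stored coefficient matrix.
C_t, *_ = np.linalg.lstsq(W[pivot_rows].T, W[rest].T, rcond=None)
residual = np.linalg.norm(W[rest] - C_t.T @ W[pivot_rows])

assert residual < 1e-8  # reconstruction is numerically lossless in float64
```

Repeating this over several random singular matrices in float32/float16, and with an LU-based pivot choice, reproduces the kind of residual comparison reported in the table above.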
We appreciate the reviewer's question and have included both this explanation and the supporting experiment in the updated manuscript. **Weakness 2:** It would be much better to compare to structured pruning also as baselines. **Reply:** Thank you for the suggestion. To address this, we have included a structured pruning baseline, **LLM-Pruner**, in the updated version of the manuscript. Below is the perplexity comparison on WikiText2 using the LLaMA2-7B model across various parameter densities:

| Method | 90% | 80% | 70% | 60% | 50% | 40% |
|-|-|-|-|-|-|-|
| LLM-Pruner | 6.58 | 8.81 | 13.70 | 40.49 | 126.0 | 1042 |
| MPIFA | 5.69 | 6.16 | 7.05 | 8.81 | 12.77 | 21.25 |

On average, **MPIFA reduces the perplexity gap by 87.2%** across these density levels compared to LLM-Pruner. We also benchmarked **inference speedup** and **memory usage** relative to dense linear layers, using linear layers of different dimensions on an A6000 GPU:

**Speedup over dense linear (higher is better):**

| Method (density) | d=16384 | d=8192 | d=4096 |
|-|-|-|-|
| PIFA (55%) | 1.88× | 1.70× | 1.43× |
| LLM-Pruner (55%) | 1.81× | 1.77× | 1.67× |
| LLM-Pruner (70%) | 1.42× | 1.41× | 1.35× |

**Memory usage relative to dense (lower is better):**

| Method (density) | d=16384 | d=8192 | d=4096 |
|-|-|-|-|
| PIFA (55%) | 0.56× | 0.58× | 0.64× |
| LLM-Pruner (55%) | 0.56× | 0.58× | 0.65× |
| LLM-Pruner (70%) | 0.70× | 0.72× | 0.75× |

At the same density (55%), PIFA achieves similar speedup and memory efficiency as LLM-Pruner. When comparing MPIFA at 55% density to LLM-Pruner at 70% density, MPIFA consistently offers **lower perplexity, faster inference, and reduced memory usage**. We have incorporated these comparisons into the revised manuscript. **Weakness 3:** How about the memory footprint of PIFA and baselines. **Reply:** Thank you for the question. We provide a detailed analysis of the memory footprint for both **inference** and **compression**.
**Memory Footprint During Inference:**
- As shown in Figure 5, PIFA consistently achieves lower memory usage compared to low-rank layers at the same rank. For example, at $r/d = 0.5$, PIFA **losslessly compresses** the memory of the low-rank layer by **24.2%**.
- Table 5 demonstrates that PIFA at 55% density uses **slightly less memory** than a 2:4 semi-sparse layer.
- Table 6 shows that MPIFA_NS at 55% density reduces **end-to-end memory usage** by **42.8%** on LLaMA2-7B and **43.8%** on LLaMA2-13B.

**Memory Footprint During Compression:** We report **peak memory usage** during compression for each method:

| Model | ASVD | SVD-LLM | PIFA | M |
|-|-|-|-|-|
| LLaMA2-7B | 15G | 20G | 0.5G | 6G |
| LLaMA2-13B | 30G | 25G | 1G | 10G |

For **memory efficiency** in method M:
1. **Online calibration** is used, where only the current sample is loaded to the GPU. Other samples remain on the CPU until needed.
2. Only the **current pruning layer** is loaded to the GPU, while all other layers remain on the CPU during processing.

We have included this analysis of the memory footprint during both inference and compression in the updated version of the manuscript.
Summary: The authors propose a novel factorization method and reconstruction objective for LLM compression. Without requiring retraining, the method achieves perplexity performance comparable to semi-structured pruning at a 50% compression rate. Experimental results further demonstrate that the approach is efficient in both inference speed and memory usage. Claims And Evidence: The claim that this work is the first to achieve performance comparable to semi-structured pruning, while surpassing it in GPU efficiency and compatibility, appears vague and potentially overstated. Recent works such as MoDeGPT [1] and DISP-LLM [2] have demonstrated similar results. [1] Lin, Chi-Heng, et al. "Modegpt: Modular decomposition for large language model compression." arXiv preprint arXiv:2408.09632 (2024). [2] Gao, Shangqian, et al. "Disp-llm: Dimension-independent structural pruning for large language models." Advances in Neural Information Processing Systems 37 (2024): 72219-72244. Methods And Evaluation Criteria: Standard LLM evaluation results are missing from the experiments. Theoretical Claims: They seem correct to me. Experimental Designs Or Analyses: 1. Missing LLM Evaluation Metrics: The paper does not report standard LLM evaluation metrics (e.g., perplexity or downstream task accuracy) on widely used benchmarks. Including these results is important for assessing real-world performance. 2. Compression Resource Comparison: The paper lacks an analysis of the computational resources (e.g., runtime, memory, compression time) required to perform the proposed compression, which is essential for evaluating practicality. 3. Missing Structured Pruning Baselines: Key structured pruning baselines are omitted from the comparison. In particular, methods like LLM-Pruner [1] and the SLEB layer pruning strategy [2] should be included. Although these methods may yield lower accuracy, they are highly efficient and result in fast, compact models at inference time. 4.
Retraining / Fine-tuning Comparison: How does the proposed method perform when retraining or recovery fine-tuning is allowed? A fair comparison with other methods should also include their best-case results when post-compression fine-tuning is applied. [1] Ma, Xinyin, Gongfan Fang, and Xinchao Wang. "Llm-pruner: On the structural pruning of large language models." Advances in neural information processing systems 36 (2023): 21702-21720. [2] Song, Jiwon, et al. "Sleb: Streamlining llms through redundancy verification and elimination of transformer blocks." arXiv preprint arXiv:2402.09025 (2024). Supplementary Material: I checked the material for the proofs and additional experiments. Relation To Broader Scientific Literature: Model compression is important for efficient AI. Essential References Not Discussed: 1. Structured Compression via Layer Pruning: The paper omits comparisons with recent structured compression methods that use layer pruning strategies, such as SLEB [1] and ShortGPT [2]. While these approaches may trade off some accuracy, they are highly efficient in terms of compression speed and inference latency. 2. Structured Methods Approaching Semi-Structured Performance: Recent works like DISP-LLM [3] have demonstrated that structured compression can approach or match the performance of semi-structured pruning. [1] Song, Jiwon, et al. "Sleb: Streamlining llms through redundancy verification and elimination of transformer blocks." arXiv preprint arXiv:2402.09025 (2024). [2] Men, Xin, et al. "Shortgpt: Layers in large language models are more redundant than you expect." arXiv preprint arXiv:2403.03853 (2024). [3] Gao, Shangqian, et al. "Disp-llm: Dimension-independent structural pruning for large language models." Advances in Neural Information Processing Systems 37 (2024): 72219-72244. Other Strengths And Weaknesses: Strengths: 1. The proposed method demonstrates strong perplexity performance, outperforming many existing approaches. 2. 
The decomposition technique appears novel. 3. The paper includes thorough ablation studies and discusses method efficiency in detail. Weaknesses: 1. The experimental results lack standard LLM evaluation metrics (e.g., downstream task accuracy or benchmark scores), which are important for assessing overall effectiveness. 2. Comparisons with structured compression methods are limited, particularly regarding the accuracy-efficiency trade-off (e.g., inference speed, latency). Other Comments Or Suggestions: No minor suggestions. Questions For Authors: 1. Could you include standard LLM evaluation results (e.g., downstream tasks or benchmark datasets) to better assess the practical effectiveness of your method? 2. Could you add comparisons with important structured compression baselines, and analyze their accuracy–efficiency trade-offs (e.g., perplexity vs. inference speed or memory usage)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful comments and suggestions. **Weak1:** The claim ... overstated. **Reply:** We have removed the phrase “for the first time” from the abstract. **Weak2:** Missing LLM Evaluation ... **Reply:** We have expanded our evaluation to include both **perplexity** and **downstream task accuracy** on widely adopted benchmarks. For perplexity, we report results on the C4 dataset, a large-scale and diverse corpus commonly used for evaluating LLMs. Across various compression densities, MPIFA consistently outperforms existing low-rank pruning baselines. Specifically, MPIFA reduces the perplexity gap by: - **47.6%** on LLaMA2-7B - **34.5%** on LLaMA2-13B - **55.3%** on LLaMA2-70B - **62.6%** on LLaMA3-8B on average across all densities, compared to the best-performing low-rank pruning method. To further assess real-world utility, we conducted zero-shot evaluations on the SuperGLUE benchmark, covering 8 downstream tasks using the `lm-evaluation-harness` framework. MPIFA_NS achieves the highest accuracy on **6 out of 8 tasks**, and reduces the average accuracy gap to the dense model by **42.8%**, compared to the strongest low-rank baseline. The full evaluation results are available here: - C4 PPL results: https://anonymous.4open.science/r/PIFA-68C3/c4_ppl.png - SuperGLUE zero-shot results: https://anonymous.4open.science/r/PIFA-68C3/zero_shot.png We appreciate the reviewer’s suggestion and have incorporated these evaluations into the revised manuscript. **Weak3:** Compression Resource Comparison ... practicality. **Reply:** We have added a detailed comparison of the **compression time** and **peak memory usage** during compression across different methods. 
Compression Time (on A6000 GPU): |Model|ASVD|SVD-LLM|PIFA|M| |-|-|-|-|-| |LLaMA2-7B|10h|30 min|15 min|30 min| |LLaMA2-13B|20h|1h|30 min|1h| Peak Memory Usage During Compression: |Model|ASVD|SVD-LLM|PIFA|M| |-|-|-|-|-| |LLaMA2-7B|15G|20G|0.5G|6G| |LLaMA2-13B|30G|25G|1G|10G| Notes: - All compression times are measured on a single A6000 GPU. On an A100 GPU, the compression time is approximately half. - For the "M" method, we report only the reconstruction time, excluding the time taken by the low-rank pruning step. As for **memory efficiency** in method M: 1. Online calibration is used, where only the current sample is loaded to GPU. Other samples remain on CPU until needed. 2. Only the current pruning layer is loaded to GPU, while all other layers remain on CPU during processing. These comparisons have been included in the revised manuscript to better reflect the practicality of our method. **Weak4:** Missing Structured Pruning ... time. **Reply:** We have conducted additional experiments to include **LLM-Pruner** as a structured pruning baseline. Below is the perplexity comparison on WikiText2 using the LLaMA2-7B model at various densities: |Method|90%|80%|70%|60%|50%|40%| |-|-|-|-|-|-|-| |LLM-Pruner|6.58|8.81|13.70|40.49|126.0|1042| |MPIFA|5.69|6.16|7.05|8.81|12.77|21.25| On average, MPIFA reduces the perplexity gap by **87.2%** compared to LLM-Pruner. 
We also benchmarked the **inference speedup** of PIFA and LLM-Pruner layers (relative to dense linear layers) on an A6000 GPU across different hidden dimensions: |Method (density)|d=16384|d=8192|d=4096| |-|-|-|-| |PIFA (55%)|1.88×|1.70×|1.43×| |LLM-Pruner (55%)|1.81×|1.77×|1.67×| |LLM-Pruner (70%)|1.42×|1.41×|1.35×| **Memory usage** during inference, relative to dense linear: |Method (density)|d=16384|d=8192|d=4096| |-|-|-|-| |PIFA (55%)|0.56×|0.58×|0.64×| |LLM-Pruner (55%)|0.56×|0.58×|0.65×| |LLM-Pruner (70%)|0.70×|0.72×|0.75×| At the same density (55%), PIFA achieves similar speedup and memory efficiency as LLM-Pruner. When comparing MPIFA at 55% density to LLM-Pruner at 70% density, MPIFA consistently offers **lower perplexity, faster inference, and reduced memory usage**. We have included this comparison with LLM-Pruner in the updated manuscript. Additionally, we are evaluating other recent structured pruning methods and will report those results as they become available. **Weak5:** Retraining ... is applied. **Reply:** Fine-tuning experiments are already included in Table 4 of the original manuscript. All pruned models are retrained for **one epoch** on a mixed dataset consisting of 2% WikiText2 and 98% C4, to recover performance. This setup ensures a fair and consistent comparison across all methods. The results show that **fine-tuned MPIFA continues to outperform other low-rank pruning methods**, and also achieves slightly better performance than fine-tuned semi-sparse models. Details of the fine-tuning setup can be found in Appendix B.3. **Weak6:** Structured Compression ... of semi-structured pruning. **Reply:** We have updated the manuscript to include discussions of SLEB, ShortGPT, and DISP-LLM in the related work section. We appreciate the reviewer’s recommendation to acknowledge them. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. 
While I believe the most critical concern remains the limited evaluation on LLM tasks — which are arguably more important than perplexity alone, yet only results for a 55% compression rate are reported — the authors have addressed most of the other questions raised. I am leaning toward acceptance and will maintain my current score. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful response. To address the reviewer’s concern regarding the limited evaluation of LLM tasks across compression rates, we have included additional **zero-shot evaluations at multiple compression rates [40%, 50%, 60%, 70%, 80%, 90%]** on the **SuperGLUE benchmark** using the LLaMA2-7B model. We report **zero-shot accuracy (↑)** across SuperGLUE tasks at different parameter densities, at this link: https://anonymous.4open.science/r/PIFA-68C3/zero_shot_7b_all.png Importantly, **MPIFA achieves the highest mean accuracy across all density levels**, consistently outperforming other low-rank methods. To further extend the evaluation to other model sizes, we have also included results for **LLaMA2-13B** at 55% compression, available at this link: https://anonymous.4open.science/r/PIFA-68C3/zero_shot_13b.png MPIFA also achieves the **highest mean accuracy** among low-rank methods on **LLaMA2-13B**, further demonstrating the generality and robustness of our approach. We are currently running additional experiments across multiple compression ratios on LLaMA2-13B and other model sizes, and will continue to update the results. We hope this extended evaluation further clarifies MPIFA’s performance across a broader range of settings. These results have been included in the revised manuscript. We sincerely appreciate your time, thoughtful feedback, and constructive suggestions throughout the review process.
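As a general note on the headline metric in this thread: perplexity is the exponential of the mean per-token negative log-likelihood over the evaluation set, computed over fixed-length segments. A minimal sketch, with a function name that is illustrative rather than taken from the paper's code:

```python
import math

def perplexity(nll_per_token):
    """Perplexity = exp(mean per-token negative log-likelihood),
    with NLLs in natural log, aggregated over all evaluated tokens."""
    return math.exp(sum(nll_per_token) / len(nll_per_token))
```

For example, a model that assigns probability 1/2 to every token has a perplexity of exactly 2, which is why a smaller perplexity gap to the dense model indicates better-preserved language modeling quality.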
Summary: This submission addresses the significant performance degradation observed with low-rank pruning techniques. It proposes Pivoting Factorization (PIFA), a novel lossless meta low-rank representation that unsupervisedly learns a compact form of any low-rank representation, effectively eliminating redundant information. To mitigate the performance degradation caused by low-rank pruning, the authors introduce a novel, retraining-free low-rank reconstruction method (M) that minimizes error accumulation. The authors mention that their framework MPIFA, for the first time, achieves performance comparable to semi-structured pruning, while surpassing it in GPU efficiency and compatibility. Claims And Evidence: 1. Comparative speedup of PIFA + MPIFA: Yes, the authors' speedup evaluations across different GPUs and kernels for different $d$, as well as the end-to-end efficiency evaluation in Tables 5 and 6, are important and convincing. 2. Rich ablation experiments across the calibration sample size, mix ratio, etc. are interesting and bolster the claim of superiority of the proposed method. Methods And Evaluation Criteria: The most serious concern with the submission is the evaluation strategy adopted in the paper. The authors have failed to extend their evaluation beyond perplexity, which on its own does not reliably reflect the true capabilities of the compressed model. Even the reported perplexity is limited to the WikiText2 dataset, without enough details (e.g., sequence length). I strongly recommend the authors extend their evaluation to other PPL datasets like C4 and incorporate task-centric evaluation using existing tools like LMEvalHarness. In addition, it would also be interesting to see how the proposed method extends beyond a single family of models (LLaMA 2/3), perhaps to some MoE-style models. Theoretical Claims: The authors have described the related theory clearly and in a simple way. 
Experimental Designs Or Analyses: The experimental design is very good and exhaustive, considering various cases such as the impact of calibration data, analysis of the PIFA and MPIFA contributions, speedups, etc. The evaluation strategy needs to be improved to make the paper's contribution and performance claims strong. Supplementary Material: The majority of the supplementary material was reviewed. Relation To Broader Scientific Literature: The key contributions of the paper will advance the general hardware-friendly LLM compression community. Essential References Not Discussed: The related work requires some upgrading. Some recent relevant literature needs to be discussed in the Related Work: 1. Saha, R., Sagan, N., Srivastava, V., Goldsmith, A., & Pilanci, M. (2024). Compressing large language models using low rank and low precision decomposition. Advances in Neural Information Processing Systems, 37, 88981-89018. 2. Kaushal, A., Vaidhya, T., & Rish, I. (2023). Lord: Low rank decomposition of monolingual code llms for one-shot compression. arXiv preprint arXiv:2309.14021. 3. Sharma, P., Ash, J. T., & Misra, D. (2023). The truth is in there: Improving reasoning in language models with layer-selective rank reduction. arXiv preprint arXiv:2312.13558. 4. Jaiswal, A., Yin, L., Zhang, Z., Liu, S., Zhao, J., Tian, Y., & Wang, Z. (2024). From galore to welore: How low-rank weights non-uniformly emerge from low-rank gradients. arXiv preprint arXiv:2407.11239. 5. Wang, Q., Ke, J., Tomizuka, M., Chen, Y., Keutzer, K., & Xu, C. (2025). Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives. arXiv preprint arXiv:2502.02723. Other Strengths And Weaknesses: The submission has significant innovation from both the PIFA and MPIFA perspectives. The evaluation is lacking, and I am willing to increase my score once the authors provide some convincing experiments during the rebuttal. Other Comments Or Suggestions: 1. The authors introduce two variables, r and d, directly in the abstract and introduction without explicitly defining them. 
Although they are simple to interpret, I encourage the authors to define any variable before using it. 2. The writing of the paper can be significantly improved. It is important to understand what is essential and what is not, to guide which sections should find a space in the main draft and which can be moved to the supplementary. For example, the details of the computational cost of PIFA in Section 3.2 can be moved to the supplementary while listing the key finding. Many parts of the main submission in Section 5.3 refer to results in the supplementary and are not standalone. I recommend the authors make the main draft as independent as possible while moving not-so-important ablations to the supplementary. Questions For Authors: 1. The paper mentions that PIFA is fully differentiable, suggesting potential integration into the training stage. Have the authors explored this direction? 2. How can this method be integrated with, and benefit, some recent quantization techniques like ZeroQuant-v2? Yao, Z., Wu, X., Li, C., Youn, S., & He, Y. (2023). Zeroquant-v2: Exploring post-training quantization in llms from comprehensive study to low rank compensation. arXiv preprint arXiv:2303.08302. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable and constructive feedback. We appreciate the opportunity to address your concerns. **Weakness 1: (Expand evaluation)** The authors have failed to ... existing tools like LMEvalHarness etc. **Reply:** Thank you for raising this concern. We have extended our evaluation in two directions: 1. **C4 Perplexity (PPL) Evaluation**: We report perplexity on the C4 dataset across various model sizes and parameter densities. The results demonstrate that **MPIFA significantly outperforms existing low-rank pruning methods**, reducing the perplexity gap by - **47.6%** on LLaMA2-7B - **34.5%** on LLaMA2-13B - **55.3%** on LLaMA2-70B - **62.6%** on LLaMA3-8B on average across all densities, compared to the best-performing baseline. Full results: https://anonymous.4open.science/r/PIFA-68C3/c4_ppl.png Example results at 50% density: | Model | SVD | ASVD | SVD-LLM | MPIFA | |--------------|--------|---------|---------|----------| | LLaMA2-7B | 58451 | 25441 | 129.8 | **52.01** | | LLaMA2-13B | 18196 | 3537 | 110.4 | **42.03** | | LLaMA2-70B | 7045 | OOM | 44.10 | **29.04** | | LLaMA3-8B | 143573 | 108117 | 784.8 | **257.4** | 2. **Task-Centric Evaluation**: We conducted **zero-shot evaluation** on the **SuperGLUE benchmark** using 8 downstream tasks via the lm-evaluation-harness (https://github.com/EleutherAI/lm-evaluation-harness) framework. All methods use WikiText2 as the calibration dataset, and follow the same configuration as in the main manuscript. Results: https://anonymous.4open.science/r/PIFA-68C3/zero_shot.png MPIFA_NS achieves the best accuracy on **6 out of 8 tasks** and reduces the **average accuracy gap by 42.8%** compared to the best-performing low-rank pruning baseline. We appreciate the reviewer’s suggestion. These additional evaluations have been added to the revised manuscript. **Weakness 2:** Wikitext2 dataset doesn't contain enough details (e.g. seqlen etc.). **Reply:** Thank you for the feedback. 
The sequence length used in all experiments, including both WikiText2 and C4, is **2048**. We have updated the manuscript to include this information. **Weakness 3:** In addition, it will be also interesting to see how the proposed method extend beyond only one single family of models (LLaMa 2/3) - may be on some MoE style models. **Reply:** Thank you for the suggestion. We are currently exploring this and will report the results here as soon as possible. **Weakness 4: (Expand related work)** Related work requires some upgrade ... arXiv:2502.02723. **Reply:** Thank you for the suggestion. We have updated the manuscript and included these articles in the related work section. **Weakness 5:** Authors introduce two variables r and d directly in abstract and introduction without explicitly defining it. **Reply:** Thank you for the suggestion. We have revised the original phrasing “at $r/d = 0.5$” to “at rank equal to half of the dimension” for clarity. **Weakness 6: (Improve the article's structure)** The writing of the paper ... in supplementary. **Reply:** Thank you for this valuable suggestion. We have moved Section 3.3 to the appendix while retaining the key conclusions about the computational and memory cost of PIFA in the main paper. Since the final version allows one additional page, we have also moved the previously referenced plots from the appendix (cited in Section 5.3) into the main body, ensuring the draft is more self-contained without exceeding the page limit. **Weakness 7:** The paper mentions that PIFA is fully differentiable, suggesting potential integration into the training stage. Have the authors explored this direction? **Reply:** Thank you for raising this interesting point. We are happy to share our preliminary findings. 
While PIFA ensures **lossless conversion in the forward pass**, it does **not guarantee identical gradient descent dynamics** compared to traditional low-rank training methods, as different factorizations can lead to different gradients. **Weakness 8:** How can this method integrated and benefit some recent quantization techniques like ZeroQuant-v2. **Reply:** Thank you for the suggestion. We are currently exploring this and we will provide further updates as soon as possible.
Understanding the Limits of Deep Tabular Methods with Temporal Shift
Accept (poster)
Summary: - The paper analyzes temporal splits for tabular DL. It proposes a new splitting strategy and also analyzes how random splitting affects performance. Additionally, the authors propose temporal embeddings using a Fourier transformation, somewhat following the ideas proposed in [1] with PLR embeddings --- [1] Gorishniy, Yury, Ivan Rubachev, and Artem Babenko. "On embeddings for numerical features in tabular deep learning." Advances in Neural Information Processing Systems 35 (2022): 24991-25004. Claims And Evidence: - The claim that the proposed new temporal split is superior is not supported by the results (Figure 5), as the random split seems to perform nearly identically - While that claim is not met, the finding that the random split outperforms the split from Rubachev (2025) is an interesting finding in itself. If the claims in the abstract/contributions that the newly proposed method "offers substantial improvements" are scaled down a bit, this is not an issue - The proposed temporal embedding also does not offer any improvement over the known PLR embeddings (Figure 8) Methods And Evaluation Criteria: - Yes, the benchmarks are very solid; however, there are some questions I have regarding the results: - In Figure 8, on the left you have MLP and MLP-PLR, but then have PLR as an embedding strategy. What do you use in MLP-PLR for the other embedding strategies, and how does MLP differ from MLP-PLR when using the PLR embedding strategy? - Since in Figure 8 you analyze the random splits, how does the random split compare in Figure 3? Theoretical Claims: - Yes; however, there are no proofs/theoretical contributions Experimental Designs Or Analyses: - The experimental design seems very solid, with very new and interesting models being analyzed. - However, I would be interested in how truly autoregressive tabular models are affected by the splits and by your embeddings [1]. - How are non-DL models affected by the splits, i.e., boosting models or PFNs? 
- More importantly, no code is provided during submission. While all results and used benchmarks are very consistent, reproducible code should already be provided at submission time. --- [1] Thielmann, Anton Frederik, et al. "Mambular: A sequential model for tabular deep learning." arXiv preprint arXiv:2408.06291 (2024). Supplementary Material: Yes, briefly looked over the results. Relation To Broader Scientific Literature: The paper very directly relates to the TabReD work proposed by [1], as also noted by the authors. Additionally, the temporal embeddings are minor adjustments to the work proposed by [2]. The (in my opinion) most interesting finding, that random splits seem to work very well, is not adequately analyzed and addressed, as it directly confronts the ideas presented in [1]. --- [1] Rubachev, Ivan, et al. "TabReD: Analyzing Pitfalls and Filling the Gaps in Tabular Deep Learning Benchmarks." arXiv preprint arXiv:2406.19380 (2024). [2] Gorishniy, Yury, Ivan Rubachev, and Artem Babenko. "On embeddings for numerical features in tabular deep learning." Advances in Neural Information Processing Systems 35 (2022): 24991-25004. Essential References Not Discussed: - I wonder how these splits affect ICL models (TabICL, TabPFN v2) [1, 2]. - Other than that, all necessary work is included and adequately addressed. --- [1] Qu, Jingang, et al. "TabICL: A Tabular Foundation Model for In-Context Learning on Large Data." arXiv preprint arXiv:2502.05564 (2025). [2] Hollmann, Noah, et al. "Accurate predictions on small data with a tabular foundation model." Nature 637.8045 (2025): 319-326. Other Strengths And Weaknesses: - The paper is overall very well written and the benchmarks/tests seem extremely solid. Models as new as TabM (ICLR 2025) are already included. 
Other Comments Or Suggestions: typos: Abstract, line 028: "analyses" should be "analyze" introduction: Not just continuous/categorical for regression/classification, i.e., distributional approaches, count data/Poisson Questions For Authors: - Where are the splits a), b), c), and d) coming from? Is that from Rubachev (2025) or your contribution? And why is "Ours" not shown in the graphic on the right? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! We will address your concerns in the following responses. First, we would like to clarify several key differences between our work and TabReD and PLR. 1. **Difference from TabReD**: TabReD shows that real-world tabular datasets, inherently containing temporal shifts, require **temporal splits for realistic test sets**. Random splits may lead to misleading assessments, and TabReD demonstrates that temporal splits significantly alter model rankings. Our work builds upon TabReD by identifying **validation set splitting** as a crucial factor affecting model performance once the test set position is fixed. We find that, in this setup, randomly splitting the validation set yields better results than using the temporal split in TabReD. We further analyze the underlying reasons behind this observation and propose our refined temporal split. Therefore, our contribution focuses on **validation set splitting**, whereas TabReD primarily addresses **test set splitting**, making the two approaches complementary rather than overlapping. 2. **Difference from PLR emb**: Our temporal emb differs from PLR emb both in **scope and design**. PLR emb is a numerical feature emb method that samples periodicities from $N(0, \sigma)$ to capture cycles in numerical features. In contrast, our temporal emb is specifically designed to incorporate **timestamp information** into models, rather than treating it as just another numerical input feature. It is a **plug-and-play** approach. Structurally, our method relies on **prior cycles**, making it more suitable for handling temporal patterns. Additionally, we explicitly address the challenges of **multi-period coupling** and **trend representation**, which are essential in temporal settings but are not considered in PLR emb. We also **consider PLR emb a baseline** when designing our temporal emb (figure 8 left). We hope this clarifies our contributions. 
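To make the prior-cycle design above concrete, here is a minimal sketch of a Fourier timestamp encoding over fixed daily/weekly/yearly periods plus a normalized trend feature. The period set, time units (seconds), trend normalization, and names are assumptions for illustration, not the paper's exact implementation:

```python
import math

# Assumed prior cycles, in seconds: day, week, year.
PERIODS = (86400.0, 7 * 86400.0, 365.25 * 86400.0)

def temporal_embedding(ts, t_min, t_max):
    """Encode a Unix timestamp as sin/cos features over fixed prior cycles,
    plus one linear trend feature normalized to the observed time span."""
    feats = []
    for period in PERIODS:
        phase = 2.0 * math.pi * (ts % period) / period
        feats.extend([math.sin(phase), math.cos(phase)])
    feats.append((ts - t_min) / (t_max - t_min))  # trend in [0, 1] over the span
    return feats
```

The resulting vector can be concatenated with the model's other input features; handling multi-period coupling and learned trends, as described above, would go beyond this sketch.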
Our code is available at https://anonymous.4open.science/r/Tabular-Temporal-Shift-BCCA/, with additional results. > Random split perform identical. **The random split serves as a baseline**. By analyzing its differences with temporal splits in TabReD, we identified the impact of training lag and validation bias, **motivating our temporal split strategy**. **While the random split performs well, it suffers from instability** (Std increased by 154%). Our proposed temporal split not only maintains competitive performance but also significantly improves stability, as shown in **Table B in repository**. > How are non DL/autoregressive/ICL methods effected by the splits? 1. We have presented the impact of non-DL methods (including Linear, Boosting methods, and Random Forest) under different splits in **Figure 2, Figure 5, and the Appendix (page 13)**. These methods consistently show improvements, with generally better performance in our new temporal split. 2. We tested the **Mambular** method. Due to the large size of the TabReD dataset and the inefficiency of autoregressive methods, we only provided results for six datasets. Under our split, its performance showed a significant improvement (+4.56%), as shown in Table E in repository. 3. We tested **TabPFN v2**. Since the general model does not require training, we adjusted its context samples: in the Original split, we randomly selected 10,000 context samples, while in Ours, **we selected the 10,000 context samples closest to the test set**. This also led to an improvement (+0.71%), as shown in Table E. > Since Fig 8 you analyze random splits, how does random split compare in Fig 3? Figure 8 focuses on comparing **our updated temporal split** with the improvement brought by adding temporal embs. **The random split is only presented in Figures 2 and 5**. 
In Figure 3, the bar chart on the right compares the performance of four specifically constructed splits to analyze the effects of reducing training lag, mitigating validation bias, and ensuring validation set equivalence. **Since these splits contain different amounts of data, their results cannot be directly compared to those of the Original, Ours, or Random splits**. > Where are the splits a), b), c) and d) coming from? The splits a), b), c), and d) are **entirely our contribution**. Our work focuses on different aspects compared to TabReD. > Why is "Ours" not shown in the graphic on the right? The (a,b,c,d) splits are used **only for analysis** of training lag, validation bias, and validation equivalence, which can be seen as an **ablation study**. These splits were created by **discarding data** due to dataset limitations. In contrast, both the original and our temporal splits use the entire dataset before $T_{train}$. As a result, **they cannot be directly compared with the Original or Ours**. We hope this response addresses your concern. Please feel free to raise any further questions! --- Rebuttal Comment 1.1: Comment: Dear authors, thank you for your answers and clarifications. > How are non DL/autoregressive/ICL methods effected by the splits? Thank you for these experiments. Improvements for TabPFN and even for autoregressive models (although it seems to perform very poorly in general) are very interesting. I would appreciate it, if you included all of these in the paper/appendix. --- As a result of your clarifications/efforts during the rebuttal, I have adjusted my score. 2-> 3 --- Reply to Comment 1.1.1: Comment: Thank you for your engagement and effort during the review process! We are also glad that our efforts to address your concerns were helpful. All discussed revisions will be carefully incorporated into the final version.
Summary: The paper investigates the impact of temporal shift in tabular data and presents a set of solutions to mitigate its effects. Since tabular data instances are typically collected in chronological order, temporal shift naturally arises. The authors first find that the commonly used time-based validation split results in worse performance compared to random splitting, and propose a refined temporal splitting protocol designed to reduce training lag and verification bias. Then, from the perspective of model representation, the authors find that the existing methods fail to capture temporal information, and propose a temporal embedding method for deep methods. The experimental results show that both the splitting protocol and the temporal embedding significantly improve the performance of the model under temporal shift scenarios. ## update after rebuttal: Most of my concerns are addressed Claims And Evidence: Yes, the claims are generally well-supported by experimental results. Methods And Evaluation Criteria: Yes, the proposed splitting protocol and temporal embedding are all tested on the TabReD benchmark, which focuses on temporal shifts in tabular data. Theoretical Claims: This paper focuses on experiment design and analysis and does not make any theoretical claims. Experimental Designs Or Analyses: The experimental design is well-structured and effectively evaluates the proposed claims and methods. Splitting protocol: Four new splits are used to verify the effects of reducing training lag, reducing validation bias, and ensuring the equivalence of the validation set, respectively, with controlled variables. The use of MMD visualization and loss distribution further supports these claims. Temporal embedding: The authors first identify the missing temporal information in MLP representations, including multiple periods and trend information. They then propose a temporal embedding that introduces period and trend information into the model. 
The only potential issue is that the authors do not present the model representation after incorporating their temporal embedding. Supplementary Material: I reviewed the supplementary material. The authors discuss how their implementation differs from TabReD, specifically by removing the extra numerical encoding to reveal the model's original capabilities. They also examine the impact of non-uniform dataset sampling. Due to this issue, in order to ensure that the validation set sizes across different partitions remain consistent, the validation sets actually correspond to different time spans. The authors argue that this discrepancy negatively affects validation equivalence. Finally, the supplementary material provides the complete detailed experimental results. Relation To Broader Scientific Literature: Tabular learning is generally based on the i.i.d. assumption, but this paper focuses on temporal shift, which has strong practical significance. Among recent deep tabular models, retrieval-based methods like ModernNCA have demonstrated excellent performance but are considered to perform poorly in distribution shift scenarios. This paper shows that by applying a refined splitting protocol and temporal encoding, retrieval-based methods can regain competitiveness. Essential References Not Discussed: This paper sufficiently reviews the relevant literature. Other Strengths And Weaknesses: Strengths: The paper is well structured and easy to follow. The experiment designs and results are convincing. Weaknesses: The paper compares various types of models (e.g., retrieval-based methods, ensemble-based methods) and provides some analysis in the experimental section, such as “the importance of no-lag candidates for retrieval-based methods.” However, the detailed design for these different methods is not discussed in the related work or preliminary sections. 
The authors claim that their splitting protocol achieves similar performance to random splitting while improving stability. However, they present the percentage change in the robustness score, which may not be intuitive. The performance of the MLP on the HD dataset is significantly weaker than that of the other methods, and the authors excluded this dataset when computing the percentage improvement of the other methods relative to the MLP. This could potentially cause confusion. It is recommended to use a metric that is robust to outliers. Other Comments Or Suggestions: There are some typos in the paper. The authors only show the performance improvement of the model after applying the splitting protocol and temporal embedding. It is recommended to include a performance comparison of the methods under different protocols. Questions For Authors: Please refer to the weaknesses and comments. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are grateful for your constructive suggestions! We will address your concerns in the following responses. > The detailed design for these different methods is not discussed. We apologize for this oversight. We will add an additional section in the preliminary in the revision to introduce the fundamentals of learning from tabular data. > They present the percentage change in the robustness score, which may not be intuitive. Thank you for your suggestion! We have now additionally provided the **change in standard deviation** in **Table B** in the repository: https://anonymous.4open.science/r/Tabular-Temporal-Shift-BCCA/. In our comparison, while our method results in a slightly higher standard deviation than the original split (+16.7%), it achieves a significantly lower standard deviation compared to random splitting (+154%). > It is recommended to use a metric that is robust to outliers. We agree. In the revision, we will replace the average percentage change calculation with a **robust average**, which excludes the maximum and minimum values when computing the percentage change across the eight datasets. We have updated Figure 2 by **Figure A** in repository, and **it continues to support the same conclusion**: random splitting significantly improves model performance compared to the original split and aligns more closely with existing benchmark results. > Include a performance comparison of the methods under different protocols. We have included a **comparison of model performance under different protocols**, specifically after applying our splitting method and temporal embedding, shown in **Figure A bottom**. All comparisons use the **robust average** to compute the average percentage change relative to the original split for MLP, ensuring direct comparison. Additionally, we provide the **average rank** of the models for a more comprehensive evaluation, shown in **Table A**. We hope this response addresses your concern. 
Please feel free to raise any further questions! --- Rebuttal Comment 1.1: Comment: Most of my concerns are solved and I'd like to raise the score. --- Reply to Comment 1.1.1: Comment: Thank you for your insightful comments and suggestions, which have greatly contributed to improving our work! We will incorporate the corresponding changes in the revision.
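The "robust average" used in the rebuttal above, i.e. a trimmed mean over the eight per-dataset percentage changes, can be sketched as follows; the exact trimming rule in the revision may differ:

```python
def robust_average(values):
    """Trimmed mean: drop one maximum and one minimum value, then
    average the rest, so a single outlier dataset cannot dominate."""
    if len(values) <= 2:
        raise ValueError("need more than two values to trim")
    trimmed = sorted(values)[1:-1]
    return sum(trimmed) / len(trimmed)
```

For example, `robust_average([1.0, 2.0, 3.0, 100.0])` returns 2.5, whereas the plain mean would be 26.5.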
Summary: The paper tackles the problem of how deep tabular methods deteriorate under temporal distribution shifts, where data distributions evolve over time. It demonstrates that typical temporal splitting (training on earlier data, validating on data just slightly more recent, and then testing on even later data) can hinder performance because of a training lag (lack of recent training examples) and validation bias (the validation split may not fully reflect the larger distribution shift faced at test time). By carefully analyzing these issues, the paper proposes a new temporal splitting protocol that reduces training lag and validation bias and thereby achieves performance closer to that of a random data split, while maintaining temporal realism. ## update after rebuttal After carefully considering the points you presented and other reviewers' comments, I still believe that my initial evaluation remains accurate. Therefore, my score remains unchanged. Claims And Evidence: Yes. The claims made in the paper are mostly empirically validated with comprehensive experimental results. Methods And Evaluation Criteria: Yes. The temporal shift problem is common and the proposed method seem to resolve it accordingly as expected. Theoretical Claims: The paper does not seem to make theoretical claims. Experimental Designs Or Analyses: The experiments appear thorough and carefully controlled, with clear metrics and repeated random seeds. The design supports the main conclusions well. 1. **[Important] Experimental Design:** The authors carefully **ablate different splitting strategies** in terms of training/validation intervals, time-lags, reversed splits, etc. This isolates the roles of lag, bias, and “equivalence” in validating the test distribution. 2. **Qualitative Analysis:** They use **MMD heatmaps** to illustrate how the model’s learned feature distributions differ across time, giving a qualitative sense of whether or not periodic/temporal patterns are preserved. 
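Regarding the MMD heatmaps in point 2: a minimal, pure-Python sketch of the biased squared-MMD estimate under an RBF kernel; the bandwidth, names, and choice of the biased (rather than unbiased) estimator are assumptions, not necessarily the paper's setup:

```python
import math

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between samples X and Y (lists of
    equal-length feature vectors) under an RBF kernel:
    mean k(x, x') + mean k(y, y') - 2 * mean k(x, y)."""
    def k(a, b):
        d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return math.exp(-d2 / (2.0 * sigma ** 2))
    def mean_k(A, B):
        return sum(k(a, b) for a in A for b in B) / (len(A) * len(B))
    return mean_k(X, X) + mean_k(Y, Y) - 2.0 * mean_k(X, Y)
```

A heatmap entry would then be `rbf_mmd2` applied to the learned features of two time windows: near zero when the windows match in distribution, larger as they drift apart.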
Supplementary Material: Yes. I carefully checked Appendix A-C for rationales behind the experimental setup and results. Relation To Broader Scientific Literature: The paper also aligns with broader trends in studying temporal distribution shifts in fields like time-series forecasting and handling “open-environment” learning. Essential References Not Discussed: To the best of my knowledge, the paper has included the essential references. Other Strengths And Weaknesses: **Strengths** 1. **Insightful diagnosis** of where conventional temporal splits fail: the paper clarifies the subtle difference between a time-based approach that is correct for causal or forecasting tasks vs. real-world tabular tasks that remain partly cross-sectional. 2. **Lightweight method** (Fourier embedding) that is easy to replicate in practice, plus a well-reasoned splitting procedure. 3. **Extensive experimental evidence** with thorough ablations, multi-seed runs, and MMD-based visualizations. **Weaknesses** 1. **[Important]** The **temporal embedding** approach is tested primarily on a fixed set of known cycles (daily, weekly, monthly, yearly). Real data might have domain-specific cycles or more complex patterns that could require further tuning. 2. The authors do not seem to provide the code, so I remain conservative about the results reported in the paper. Other Comments Or Suggestions: 1. It might help if the authors analyzed the cost of ignoring certain periods or the potential mismatch between a fixed Fourier period (e.g., 7 days) and real data that might have a different cycle. Questions For Authors: 1. **[Important]** The authors focus on daily, weekly, monthly, and yearly cycles. Are there practical guidelines for selecting these cycles or discovering new cycles automatically in domain-specific tasks? Have you tried letting the model learn different frequencies if the known cycles are not relevant? 2.
**[Important]** As the proposed splitting method does not seem to keep the original temporal order of the data samples, could it cause a data leakage problem? For instance, an older sample might be used for validation while the corresponding newer sample is used in training. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your thoughtful comments! We will address your concerns in the following responses. > The authors do not seem to provide the code, so I remain conservative about the results reported in the paper. Our code is now available at https://anonymous.4open.science/r/Tabular-Temporal-Shift-BCCA/. Enjoy the code! > The temporal emb approach is tested primarily on a fixed set of known cycles. **The focus of this paper is to analyze why models perform poorly in temporal shift scenarios.** We identified the absence of temporal information in model representations and **improved performance by introducing our temporal embedding, thereby completing our argument**. Handling unknown or variable cycles will be the focus of **future work**, and we will include this discussion in the **limitations** section. Regarding the approach to learning **unknown or variable cycles**: - A common approach is to **reweight training samples** [1], but such methods **require an accurate validation set** (instead of a validation set similar to the test time), which is **difficult to obtain in temporal shift tasks**. - Another method is to **introduce a matching mechanism** between the training and test set distributions (e.g., attention), but this **requires having a test distribution at test time**, meaning multiple test samples must be obtained, which is closer to test-time adaptation [2]. Based on the MMD visualization and experimental results comparison presented in the paper, we believe the existing fixed cycles already **effectively cover most scenarios (lines 379-384)**. We have also provided experimental results with variable cycles, as addressed in the next question. > Have you tried letting the model learn different frequencies if the known cycles are not relevant? Yes, we also experimented with **tuning hyperparameters** to find the cycles, with results in **Table C in the repository**.
When using fixed prior periods, ModernNCA achieved a 0.30% performance improvement, while **setting adjustable cycles resulted in a -2.48% performance decline**. In temporal distribution shift scenarios, due to the absence of an entirely accurate validation set, we believe that **prior knowledge of fixed cycles is more stable and interpretable than adjustable cycles**. It's also important to note that in many tasks, **complete cycles are not available**. For example, in the weather dataset, there is a yearly cycle, but the training set does not span a full year, which highlights the importance of prior knowledge. > Are there practical guidelines for selecting these cycles or discovering new cycles automatically in domain-specific tasks? For domain-specific tasks, we still recommend **using prior cycles informed by expert knowledge** or setting them **based on MMD visualization**. > Is it possible to cause data leakage problems? This splitting method does not introduce data leakage, as no information that would be unavailable at deployment is used during training. Therefore, **the model's performance on the test set remains reliable**. We hope this response addresses your concern. Please feel free to raise any further questions! --- [1] Mengye Ren et al. Learning to Reweight Examples for Robust Deep Learning. ICML 2018: 4331-4340 [2] Changhun Kim et al. AdapTable: Test-Time Adaptation for Tabular Data via Shift-Aware Uncertainty Calibrator and Label Distribution Handler. CoRR abs/2407.10784
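To make the fixed-cycle temporal embedding discussed in this rebuttal concrete, here is a minimal sketch of the idea (sin/cos features for each fixed prior period). The function name, the period set, and the Unix-seconds input are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np

DAY = 24 * 3600.0
# Fixed prior periods in seconds: daily, weekly, monthly, yearly cycles.
PERIODS = np.array([DAY, 7 * DAY, 30 * DAY, 365 * DAY])

def temporal_embedding(timestamps):
    """Map Unix timestamps (seconds) to [sin, cos] features for each fixed cycle."""
    ts = np.asarray(timestamps, dtype=float)[:, None]
    phase = 2 * np.pi * ts / PERIODS[None, :]
    return np.concatenate([np.sin(phase), np.cos(phase)], axis=1)  # shape (n, 8)

emb = temporal_embedding([0.0, DAY / 2, DAY])  # midnight, noon, next midnight
```

Because the periods are fixed priors, no training data spanning a full cycle is required, which matches the weather-dataset example above.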
Summary: The paper studies temporal shift aspects of tabular data. The paper has two main contributions: - First, authors propose a data splitting and validation protocol for improved model performance. The protocol aims to mitigate two phenomena discussed in the paper *1) training lag* (distance between the last training timestamp and the first testing timestamp – intuitively, the more up-to-date the training data is the better) *2) validation bias* (the difference between `train ↔ val` and `train ↔ test` differences – this can affect model selection in tuning and early stopping) - Second, the authors look at the temporal patterns in the data (periodicity, trends) and argue that some architectures fail to capture those. To remedy this, authors propose a timestamp embedding, which improves performance for some DL architectures. Claims And Evidence: Claims are adequately (albeit not fully) supported by the experimental evidence. Here are the key aspects that should be expanded upon in the experiments, in my view: **1**) The first contribution (regarding splitting strategies) makes a claim that the proposed non-standard validation procedure is better for each model, mainly relying on the fact that hold-out future test scores are better. The authors should be careful when proposing new general evaluation procedures (moreover, non-standard ones with "backward in time" validation) because 1) better test scores do not necessarily indicate that the benchmark (including the protocol) is a better reflection of real-world conditions and model performance (for example, the near-zero train/test gap might be impossible in real-world deployments). The best solution I see for this issue is – instead of proposing a new, better protocol – to position the work as an analysis of e.g. the train/test gap or validation bias, and to expand such analyses. **2**) In the second contribution, the mechanism and utility of the periodic time embedding are not fully studied (e.g.
its effects are not universal across model types) Methods And Evaluation Criteria: - Comparisons are done model-wise – e.g. all model types individually did improve their test scores; how do the new strategies affect our knowledge regarding model comparisons? Does this make results more consistent with prior benchmarks? Plots with relative improvement over a baseline MLP (trained in the same setting as all other methods) could be better suited for answering such questions. - The evaluation done in Figure 3 does not consider the confounding effects of varying the training set. Some additional experiments where only the validation and test sets are changed, to model train/test gaps and validation bias effects, would greatly improve the robustness of the results. - In Figure 3, the "ours" variant is not present for the (a,b,c,d) splits of equivalent size. - The Figure 6 analysis is done just for the simplest MLP model. Results from Figure 8 suggest that the proposed embedding only helps when using the simplest (not SoTA) models. This effect is not investigated. Why is it so? Are SoTA models able to learn the temporal patterns without any additional embeddings? Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: There are issues in the experimental analyses that need further investigation: **1**) The result on the Homecredit Default dataset is excluded from all comparisons; the explanation of the sub-par performance provided by the authors in the appendix does not seem correct: > One key difference in the implementation is that the (HD) HomeCredit-Default dataset suffers from severe class imbalance, which makes it difficult for methods with limited feature extraction capabilities, such as MLP, SNN (Klambauer et al., 2017), and DCNv2 (Wang et al., 2021b), to perform well. TabReD utilizes numerical feature encoding, which may significantly improve the performance of these models on this dataset, but the improvement is not substantial on other datasets.
Publicly available TabReD benchmark results suggest otherwise for MLPs without numerical feature embeddings (I interpreted "encoding" here as embeddings for numerical features – please correct me if I'm wrong). Additionally, results on the SH (Sberbank Housing) dataset for the MLP are significantly worse here than the ones reported in the TabReD paper. Otherwise, the protocol is solid; I don't see other problems. Supplementary Material: No supplementary materials were provided. Relation To Broader Scientific Literature: The paper extends observations made in the original TabReD benchmark paper and investigates the aspects of non-i.i.d. validation. I find the discussion of similar procedures (that are used in financial data analysis and prior Kaggle competitions) lacking. Examples of relevant work in this area: - "A survey of cross-validation procedures for model selection" https://arxiv.org/abs/0907.4728 (Section 8.3) - "CVTT: Cross-Validation Through Time" https://arxiv.org/abs/2205.05393 (and related work mentioned there) - Purged K-fold cross-validation, described in "Advances in Financial Machine Learning" (2018), is often used in competitions when "backtesting". It includes validation at different time periods. Essential References Not Discussed: No essential references were missed, *but* the above discussion of the relation to broader scientific literature points to relevant areas that were not discussed (data splitting strategies with non-i.i.d. data in neighboring domains). Other Strengths And Weaknesses: Strengths: - A more in-depth look at the handling of temporal non-i.i.d. tabular data is timely and interesting; the paper is doing important work (I do not want this review and comments to discourage the overall direction the work is taking) - The analysis and insights into both splitting strategies and other temporal patterns in datasets are new and interesting.
- The writing is good, but some points could be iterated upon and improved (I had trouble digesting the arguments) Other Comments Or Suggestions: Mostly about the presentation and writing: - When describing the experiments with training lag and validation bias, I think a clearer way to communicate the results (this is how I understood them best) would be, e.g., "split (a) has less bias compared to split (c)" (not that it is without bias). Questions For Authors: - Did I understand correctly that in Figure 2 you split train and val randomly but leave the test set out temporally? If so, this should be better communicated. - What is a "one-dimension PLR embedding"? - Are SoTA models able to learn the temporal patterns without any additional embeddings? Code Of Conduct: Affirmed. Overall Recommendation: 3
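To illustrate the purged K-fold procedure this review points to under the literature discussion: a minimal sketch of the purging mechanic on time-ordered indices (our own simplification for illustration, not the book's exact procedure; the function name and defaults are ours).

```python
def purged_kfold_splits(n, n_folds=5, purge=2):
    """Time-ordered K-fold where training indices within `purge` steps of the
    validation fold are dropped, preventing leakage across the temporal boundary."""
    fold = n // n_folds
    for i in range(n_folds):
        lo = i * fold
        hi = (i + 1) * fold if i < n_folds - 1 else n
        val = list(range(lo, hi))
        train = [j for j in range(n) if j < lo - purge or j >= hi + purge]
        yield train, val

splits = list(purged_kfold_splits(20, n_folds=4, purge=2))
```

Unlike a single backward-in-time split, this validates at several time periods while still keeping a temporal gap around each validation fold.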
Rebuttal 1: Rebuttal: Thank you for your insightful feedback! We will address your concerns in the following responses. > The authors should be careful ... Our experimental setup **strictly follows TabReD**. Moreover, since this is a temporal scenario where each test sample is evaluated individually, the **varying time gaps between each sample and the training set** reflect real-world conditions. Additionally, **this does not impact our training protocol**, which only provides **guidance stated in lines 307–316**. Extensive experiments have validated the robustness of our training protocol, even when the training lag cannot be reduced to zero (**(c) vs. (d)** in Figure 3). The newly proposed temporal split is **primarily intended to illustrate the effectiveness of this protocol**, reinforcing our focus on **analyzing** training lag and validation bias. > Temporal emb is not fully studied. Hardly helps SOTA. This issue is already identified in our paper (**lines 401, 410**). Our temporal emb **converts timestamps into numerical inputs**, which may be **incompatible with PLR emb**. Specifically, once timestamps are embedded, their representation reflects temporal similarity. Applying another periodic transformation via PLR could **increase optimization difficulty**. Directly feeding the temporal emb into the model backbone **consistently improves performance**: MLP-PLR +0.01% → +0.26%, TabM +0.07% → +0.15%, MNCA +0.30% → +0.38%. Please refer to Table D in the repository: https://anonymous.4open.science/r/Tabular-Temporal-Shift-BCCA/. > How do the new strategies affect model comparisons? Please refer to our response to Reviewer 2MA7. > Fig 3 does not consider the confounding effects ... Our current experiments **already address this issue**. Our analysis of the loss distribution (**Fig 4 right**) isolates the impact of validation and test set variations while fixing the training set.
In splits (a,c), with the training set fixed earlier in time and shifts in the validation and test sets, we observe in **lines 234-251** the impact of reducing training lag and validation bias. > The Ours variant is not present for the (a,b,c,d) splits of equivalent size in Fig 3. The (a,b,c,d) splits are used **only for analysis** of training lag, validation bias, and equivalence. These splits were created by **discarding data** due to dataset limitations. In contrast, both the original and our temporal splits use the entire dataset before $T_{train}$. > Figure 6 done just for MLP. We will add visualizations for SOTA model representations. > The result on HD excluded from all comparisons. **The HD dataset was excluded only from the comparison in Fig 2**. This is because Figure 2 calculates the improvement of other methods relative to MLP, and MLP performs poorly on the HD dataset. As a result, the relative improvement is significantly larger (~80%) compared to the average improvement on other datasets (<6%). Including this dataset in the mean obscures the result. However, all other comparisons do not rely on relative improvements over MLP, **so the HD dataset is included in all other results**. As suggested by Reviewer 2MA7, we will adopt a **robust average** to compute the mean improvement in the revision to better handle such cases. > TabReD suggests that MLPs without numerical embs ... Here we are not referring to **numerical embs during training**, but to **feature encoding during preprocessing**, such as noisy-quantile encoding. This will be clarified in the revision. This aspect is not mentioned in the TabReD paper but can be inferred from the code: https://github.com/yandex-research/tabred/blob/main/exp/mlp/homecredit-default/tuning.toml#L54, which assigns noisy-quantile and ordinal for the MLP method on the HD dataset. **TabReD assigns different encodings to different method-dataset pairs**. We believe this introduces fairness inconsistencies.
Special encodings (e.g., noisy-quantile) do not always improve performance. Therefore, we used only basic encoding (**none for numerical and one-hot for categorical features**) for a fair comparison, despite this lowering MLP’s performance on the HD dataset. This explains the difference between our results and those in the TabReD paper. > Did I understand Fig 2 correctly? Yes. We will further clarify this. > What is 1D PLR? Sorry for this confusion. Since our proposed temporal emb is specifically designed for the timestamp, we chose a **PLR emb applied to the single timestamp input** as a baseline. > Are SoTA models able to learn ... Without timestamp information, models cannot learn the order and periodicity of samples. **The challenge is incorporating temporal information**. Fig 8 shows that treating timestamps as numerical features causes a performance drop, aligning with the common practice of removing timestamps. Only by combining the temporal emb with periodicity knowledge can temporal patterns be effectively incorporated. Due to the character limit, some responses may lack detail. Please feel free to raise any further concerns! --- Rebuttal Comment 1.1: Comment: Thanks for the extensive rebuttal response. I encourage the authors to clarify the aspects discussed during the review period in the revision (especially key differences in protocol, like not using numerical feature normalization). I also encourage them to more clearly report results for the temporal embeddings in the SoTA architectures, in particular the aspect that it does not play well with numerical feature embeddings but that this can be fixed (plus the addition of state-of-the-art models in Figure 6). I remain a bit sceptical regarding the universality of the proposed splitting procedure (as a go-to recommendation for future research) but still find the result interesting for the community. Many of my initial concerns have been addressed by the rebuttal. I raised the score accordingly.
--- Reply to Comment 1.1.1: Comment: We deeply appreciate your constructive suggestions and valuable feedback throughout the review process, which have greatly helped us improve our work! The changes addressed during the rebuttal will be carefully reflected in our revised version. Thank you again for your time and encouragement!
Interpreting CLIP with Hierarchical Sparse Autoencoders
Accept (poster)
Summary: The authors introduce Matryoshka SAE (MSAE), a novel Sparse Autoencoder (SAE) architecture that simultaneously learns hierarchical representations at multiple granularities. This is achieved by applying the topK operation multiple times while incrementally increasing the number of considered neurons. The proposed architecture exhibits an improved balance between reconstruction accuracy and sparsity compared to traditional SAEs (topK and ReLU SAEs). The authors apply MSAE to interpret CLIP embeddings and extract semantic concepts. Claims And Evidence: I have identified two main claims in this article: 1) MSAE achieves superior trade-offs between reconstruction and sparsity compared to traditional SAEs. This claim is backed by empirical results showing a better trade-off for MSAE against topK and ReLU SAEs. 2) MSAE effectively captures semantic concepts from CLIP embeddings. The authors propose various quantitative and qualitative analyses (concept naming, similarity search, bias validation). Methods And Evaluation Criteria: 1) Concerning the reconstruction/sparsity tradeoff comparison: the authors compare the different trade-offs of the topK and ReLU SAEs with the MSAE by sampling only 3 points (i.e. different values of lambda or K). So few points are not enough to provide strong evidence for the first claim. I would urge the authors to rerun their comparison by systematically considering more values of lambda or K. 2) Using only the reconstruction/sparsity tradeoff to compare various SAEs does not provide a complete view of SAE performance. For example, articles have shown that SAEs suffer from unstable concepts (i.e. two SAEs trained with different seeds lead to different concepts). Such a property strongly affects the reproducibility of the SAE and impairs their use as a widespread interpretability tool (see [1]). So metrics quantifying the instability would also help to better evaluate the different SAEs.
But instability is just one example; there are plenty of other interesting metrics that would give a more complete comparison of the SAEs (see Table 1, page 8 of [2], for a non-exhaustive list of interesting metrics to evaluate SAEs). [1] Paulo, Gonçalo, and Nora Belrose. "Sparse Autoencoders Trained on the Same Data Learn Different Features." arXiv preprint arXiv:2501.16615 (2025). [2] Fel, Thomas, et al. "Archetypal SAE: Adaptive and Stable Dictionary Learning for Concept Extraction in Large Vision Models." arXiv preprint arXiv:2502.12892 (2025). Theoretical Claims: There are no theoretical claims (this is an experimental article). Experimental Designs Or Analyses: Here are the weaknesses I found in the experiments: 1) Not enough data points (on the topK and ReLU SAEs) to strongly conclude that the MSAE exhibits a better reconstruction/sparsity trade-off than other SAEs (already mentioned before) 2) It would have been interesting to compare the MSAE to the JumpReLU SAE [1], as it is known to have a good reconstruction/sparsity tradeoff 3) Comparison with other dictionary learning methods (i.e. not SAEs) that are known to work well for extracting meaningful concepts (NMF, Sparse NMF...) would strengthen the comparison. This paper is oriented toward a specific task (i.e. concept extraction), so it would be great to compare the proposed algorithms with more concept extraction methods (SAEs are far from being the only good method to extract concepts). 4) As already mentioned before, quantifying the performance of SAEs based only on the reconstruction/sparsity tradeoff is not enough. It would be interesting to include additional metrics. [1] Rajamanoharan, Senthooran, et al. "Jumping ahead: Improving reconstruction fidelity with JumpReLU sparse autoencoders." arXiv preprint arXiv:2407.14435 (2024). Supplementary Material: The supplementary materials are complete, meaningful and very informative.
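For concreteness, the two axes of the tradeoff this review keeps returning to (reconstruction fidelity via fraction of variance unexplained, sparsity via L0) can be computed along these lines. This is a sketch of the standard definitions, not the paper's evaluation code; the names and toy data are ours.

```python
import numpy as np

def fvu(x, x_hat):
    """Fraction of variance unexplained: 0 means a perfect reconstruction."""
    return np.sum((x - x_hat) ** 2) / np.sum((x - x.mean(axis=0)) ** 2)

def l0(z):
    """Average number of active latents per sample (the sparsity axis)."""
    return float((z != 0).sum(axis=1).mean())

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 16))
noisy = x + 0.1 * rng.normal(size=x.shape)   # a slightly imperfect "reconstruction"
codes = np.array([[1.0, 0.0, 2.0], [0.0, 0.0, 3.0]])  # toy sparse activations
```

Each (lambda, K) setting of an SAE yields one (L0, FVU) point; the review's complaint is that three such points per baseline are too few to trace the curve.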
Relation To Broader Scientific Literature: SAE approaches are well situated within existing literature; however, alternative concept extraction methods such as Convex-NMF, semi-NMF, Sparse PCA, ICA, SVD, and KMeans, known to perform well on similar tasks, are notably absent from comparisons. Essential References Not Discussed: I did not find any important reference missing. Other Strengths And Weaknesses: Strengths: * Clear and well-justified approach. Weaknesses: * Limited comparative scope, omitting modern SAEs (e.g., JumpReLU) and alternative concept extraction methods. * Lack of consideration for the stability of SAE-derived concepts across training runs, a critical aspect for interpretability. Other Comments Or Suggestions: * It would be valuable to discuss or quantify the stability of concepts learned by MSAE, as stability significantly affects interpretability and practical utility in concept extraction tasks. * Discuss more explicitly why the UW variant consistently outperforms RW on semantic metrics despite having lower sparsity, questioning the suitability of sparsity/reconstruction as the primary evaluation criterion. Questions For Authors: * Could you include metrics or analyses to quantify the stability of the concepts learned by MSAE across different random initializations? * Can you elaborate on why UW outperforms RW on all tested semantic preservation metrics, and discuss whether sparsity/reconstruction is indeed the best criterion for concept extraction performance? * Why were alternative concept extraction methods (e.g., Convex-NMF, Sparse PCA, ICA) not included in your comparative analysis? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We gratefully thank the reviewer for a thorough and very insightful review of our work.** Due to space limits (5000 characters), we concisely respond to each of the major points raised. We share tables and figures with results from the requested analyses in anonymous cloud storage at https://drive.google.com/drive/folders/11OOSpexmU5ul8nBhJqpaf65Dh_eqBlZY A. **Reconstruction/sparsity tradeoff comparison.** (Mentioned in: Methods & Eval. 1, Exp. Designs 1) > The authors are comparing the different trade-off for topK and the ReLU SAE with the MSAE, by sampling only 3 points (i.e. different lambda of K). [...] I would urge the authors to rerun their comparison by systematically considering more lambda or K. Our initial experiments indeed span 4 different lambda and K hyperparameter values, as listed in Appendix B.1. We show the majority of results for 3 hyperparameter values due to observing low performance in the fourth one (lambda=0.01 & K=32, respectively). Following related work (Gao et al., 2024), we excluded the more extreme values of hyperparameters (e.g. K=512) since these lead to subpar results. As the reviewer suggested, **we now added the comparison for lambda=0.01 and K={32,512} to Tables 6 & 7** (see cloud link). We observe that lambda=0.01 gives worse results than MSAE, while K=512 collapses during training (FVU=0.3, CKNNA=0.07). B. **Stability of SAE-derived concepts across training runs.** (Methods & Eval. 2, Exp. Designs 4, Weakness 2, Other Comments 1, Question 1) > [...] Such a property strongly affects the reproducibility of the SAE and impairs their use as a widespread interpretability tool (see [1]). [...] But instability is just one example, there are plenty of interesting other metrics that would give a more complete comparison of the SAEs (see table 1, page 8 of [2], to have a non-exhaustive list of interesting metrics to evaluate SAEs). [1] "Sparse autoencoders trained on the same data learn different features." 
preprint arXiv:2501.16615 (2025). [2] "Archetypal SAE [...]" preprint arXiv:2502.12892 (2025). Could you include metrics or analyses to quantify the stability of the concepts learned by MSAE across different random initialization? Thank you for highlighting this work. **We politely note that neither of the mentioned papers was publicly available at the time of our paper’s submission.** The [2] preprint appeared over two weeks after the ICML submission deadline (on 18 Feb 2025); similarly, the [1] preprint appeared only a day before the deadline (29 Jan 2025). Not relating to literature that was unavailable at the time of submission should, in our opinion, neither be considered a weakness of our work nor affect the review score. Following the reviewer’s recommendation, **we now included an analysis regarding the stability of SAEs [1] for CLIP in the new Table 18** (see cloud link). We envision that applying the archetypal framework [2] to our proposed architecture would improve the stability of MSAE. C. **Comparison with JumpReLU.** (Exp. Designs 2, Weakness 1) > It would have been interesting to compare the MSAE to the jump ReLU SAE [1], as it is known to have a good reconstruction/sparsity tradeoff. [1] "Jumping ahead: Improving reconstruction fidelity with JumpReLU sparse autoencoders." preprint arXiv:2407.14435 (2024). **Note that the JumpReLU preprint has no official code implementation available.** During our initial work, we were unable to reproduce its results. We now used the unofficial implementation shared in the SAELens software to compare with MSAE (ImageNet-1k, CLIP ViT-L/14). In our preliminary experiments, the results of JumpReLU are the same as those of ReLU. Note that a similar issue with reproducibility has been publicly pointed out in “Interpretability evals case study” (Transformer Circuits Thread, Aug 2024). **Our result is visible in the updated Tables 6 & 7** (see cloud link). D.
**Can you elaborate on why UW outperforms RW on all tested semantic preservation metrics, and discuss whether sparsity/reconstruction is indeed the best criterion for concept extraction performance?** (Comment 2, Question 2) In brief, RW is trained with an alpha parameter that values sparser representations more highly during training than UW, where alpha does not affect the loss. RW models learn to benefit from a sparser reconstruction, sacrificing optimal reconstruction in favor of sparsity, as indicated by the lower FVU of UW. In our experiments, the CKNNA and LP semantic metrics are correlated with reconstruction, while the quantity of valid concept neurons (Table 3) is correlated with sparsity. E. **Why were alternative concept extraction methods (e.g., Convex-NMF) not included in your comparative analysis?** (Exp. Designs 3, Q. 3) Thank you for bringing this work to our attention; we view such a comparison as a natural future work direction. We primarily focused on evaluating various SAE architectures for interpreting CLIP using multiple quantitative metrics, beyond LLM applications. --- Rebuttal Comment 1.1: Comment: I agree that the listed papers were not released at the time of the submission, but these papers are just examples (among others that are older) to show the authors that plenty of other metrics could be used to meaningfully compare their SAE to others (stability is just one of them). I will keep my rating, because I still think comparing different SAEs just based on reconstruction/sparsity is not enough. --- Reply to Comment 1.1.1: Comment: We go beyond the reconstruction-sparsity evaluation protocols from related work (JumpReLU, TopK), incorporating additional meaningful metrics: **CKNNA** alignment (Table 1), measuring how SAE activations align with original CLIP representations, and **valid concept count** (Table 3), which quantifies interpretability. Our results show that valid concept count correlates with sparsity, indicating that sparsity signals interpretability.
CKNNA also reveals unique insights about SAE activations: e.g., Figures 16-17 show that only Matryoshka transfers both coarse and fine-grained features across domains, especially at expansion rate 32. Other methods' fine-grained SAE activations appear domain-specific or are used only for reconstruction.
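For readers unfamiliar with the mechanism the reviews summarize — applying TopK repeatedly with a growing activation budget — a rough sketch of such a multi-granularity forward pass follows. The weight shapes, names, and the k schedule are our illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def topk_mask(z, k):
    """Keep the k largest activations per row, zero out the rest."""
    thresh = np.sort(z, axis=1)[:, -k][:, None]
    return np.where(z >= thresh, z, 0.0)

def matryoshka_forward(x, W_enc, W_dec, ks=(8, 32, 128)):
    """One reconstruction per granularity, from coarse (small k) to fine (large k);
    training would sum the reconstruction losses over all levels."""
    z = np.maximum(x @ W_enc, 0.0)  # shared pre-activations
    return [topk_mask(z, k) @ W_dec for k in ks]

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))
W_enc = rng.normal(size=(16, 256))
W_dec = rng.normal(size=(256, 16)) * 0.01
recons = matryoshka_forward(x, W_enc, W_dec)
```

The nested budgets are what let one model cover several points on the sparsity/fidelity curve at once, rather than training a separate SAE per K.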
Summary: Sparse Autoencoders (SAEs) have been adopted to interpret CLIP’s feature representations, but the trade-off between reconstruction quality and sparsity has made it difficult to strike an ideal balance for interpretation. This paper proposes Matryoshka Sparse Autoencoder (MSAE), a hierarchical extension of SAEs, to analyze CLIP’s representation space in multiple granularities. By doing so, MSAE can extract more than 120 semantic concepts from CLIP’s embedding space and demonstrate its applicability in concept-based similarity matching (with controllable concept strength) and gender bias analysis for downstream tasks. ## Update after rebuttal I agree with the concern raised by other reviewers regarding the lack of quantitative comparisons. The authors' response does not seem to adequately address this issue. Therefore, I have lowered my score. Claims And Evidence: **Claim 1**: Existing SAE-based methods face limitations when interpreting CLIP’s multimodal representations. - Evidence: Prior studies indicate that simple L1 regularization can undervalue important features, while a strict TopK approach enforces overly rigid sparsity, imposing interpretability constraints. -> The authors cite some earlier works convincingly, making a clear and logical argument. **Claim 2**: MSAE achieves a superior balance between sparsity and reconstruction quality compared to existing SAE baselines. - Evidence: As shown in Figure 2, MSAE consistently outperforms standard ReLU and TopK SAEs on EVR vs. L0. -> Experimental results show clear and convincing evidence. **Claim 3**: MSAE facilitates more robust interpretation of CLIP’s multimodal representation. - Evidence: By applying MSAE to CLIP’s embedding space, the authors discover more than 120 interpretable concepts (Table 3). -> Experimental results show clear and convincing evidence.
Methods And Evaluation Criteria: - The proposed multi-TopK approach is well aligned with the goal of preserving hierarchical structure in CLIP embeddings. - The evaluation metrics effectively measure reconstruction fidelity. Theoretical Claims: There is no theoretical claim. Experimental Designs Or Analyses: The authors highlight the original issue—finding a workable sparsity-fidelity trade-off—and thoroughly address it using multiple metrics. Their design effectively analyzes how MSAE balances reconstruction precision against high sparsity. Moreover, ablation studies illustrate the hierarchical advantage of MSAE over single-threshold SAEs. Supplementary Material: I have reviewed the additional experimental results in the Appendix. Relation To Broader Scientific Literature: This work stands at the intersection of mechanistic interpretability and concept-based explanations for large vision-language models. It also addresses limitations in prior SAEs, bridging techniques like L1-based ReLU and fixed TopK with a flexible hierarchical approach. Essential References Not Discussed: There appear to be no critical references missing from the paper's discussion. Other Strengths And Weaknesses: **Strengths**: - The paper systematically evaluates MSAE under multiple metrics, convincingly showing that MSAE attains a better sparsity-fidelity trade-off than existing approaches. - Through experiments in Section 5, the authors illustrate how MSAE enables meaningful interpretation of CLIP, including concept-based similarity matching and downstream bias analysis, demonstrating its real-world applicability. **Weaknesses**: - While Section 4 demonstrates the advantages of MSAE over standard SAEs, Section 5 primarily focuses on MSAE’s applications—showing concept extraction and bias analysis with CLIP. It would have been even stronger if the paper made explicit how previous SAEs would fail or underperform in these same tasks. 
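To make the multi-TopK idea mentioned above concrete, here is a minimal NumPy sketch of a Matryoshka-style forward pass in which one encoder code is decoded at several sparsity levels. The dimensions, weights, and K values below are made-up assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def topk_mask(z, k):
    """Keep the k largest activations per row; zero out the rest."""
    out = np.zeros_like(z)
    idx = np.argpartition(z, -k, axis=-1)[:, -k:]  # top-k column indices per row
    rows = np.arange(z.shape[0])[:, None]
    out[rows, idx] = z[rows, idx]
    return out

def msae_forward(x, W_enc, b_enc, W_dec, ks=(16, 64, 256)):
    """Encode once, then decode at several sparsity levels (coarse -> fine)."""
    z = np.maximum(x @ W_enc + b_enc, 0.0)  # ReLU pre-codes
    return [topk_mask(z, k) @ W_dec for k in ks]

d, h = 64, 512  # toy embedding dim and dictionary size (illustrative only)
W_enc = rng.standard_normal((d, h)) * 0.02
b_enc = np.zeros(h)
W_dec = rng.standard_normal((h, d)) * 0.02
x = rng.standard_normal((4, d))

recons = msae_forward(x, W_enc, b_enc, W_dec)
# training would sum a reconstruction loss across the granularity levels
loss = sum(float(np.mean((r - x) ** 2)) for r in recons)
```

Summing the per-level reconstruction losses is what would push early dictionary units toward coarse concepts while later units add fine-grained detail, which is the hierarchical behavior the review discusses.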
Other Comments Or Suggestions: It might help to clarify in Section 5 exactly where and why alternative SAEs (e.g., single-threshold TopK) fall short for concept-based similarity or bias detection. This would further underscore MSAE’s value for real interpretability applications. Questions For Authors: * In Section 5.3, the authors train a single-layer classifier on CLIP embeddings to identify bias for the CelebA experiment. Couldn’t the text encoder in CLIP itself be used for classification without additional training? Why did you decide to train a new classifier for this bias detection step? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We sincerely thank the reviewer for acknowledging the quality and significance of our work.** > While Section 4 demonstrates the advantages of MSAE over standard SAEs, Section 5 primarily focuses on MSAE’s applications—showing concept extraction and bias analysis with CLIP. It would have been even stronger if the paper made explicit how previous SAEs would fail or underperform in these same tasks. [...] It might help to clarify in Section 5 exactly where and why alternative SAEs (e.g., single-threshold TopK) fall short for concept-based similarity or bias detection. This would further underscore MSAE’s value for real interpretability applications. Thank you for raising this point. We agree with the reviewer that a measurement of interpretability would further strengthen our paper and view it as a natural future work direction. To the best of our knowledge, **our work is the first to evaluate well-established SAE architectures using multiple quantitative metrics beyond LLM applications**. Given this scope, there is a limit to how many new contributions a single paper can explore exhaustively (new architecture, bi-modal setting, state-of-the-art performance, potential applications, and also new evaluation protocol). To facilitate research in this direction, we supplement our work with additional visual examples akin to Figure 10. **We now share figures with explanations of 502 concepts across 6 different SAEs (appearing in Table 2) in anonymous cloud storage at** https://drive.google.com/drive/folders/11OOSpexmU5ul8nBhJqpaf65Dh_eqBlZY (files `concept_visualization_*.pdf`). Each figure presents the inputs that most strongly activate a given concept across both modalities (vision and language). For a broad overview, we visualize all validated concepts (see a discussion in Appendix A) appearing in any of these 6 models (see the last column in Table 3).
> In Section 5.3, the authors train a single-layer classifier on CLIP embeddings to identify bias for the CelebA experiment. Couldn’t the text encoder in CLIP itself be used for classification without additional training? Why did you decide to train a new classifier for this bias detection step? Yes, CLIP's text encoder can generally be used for zero-shot classification. However, we chose to fine-tune a classifier on top of CLIP to demonstrate the broader applicability of MSAE to interpreting other models: for example, another classifier trained on CLIP’s representation, or even another feature extractor (backbone) model that, unlike CLIP, lacks zero-shot classification capabilities. We use MSAE to generate counterfactual explanations, which are particularly valuable in bias-sensitive predictive tasks like CelebA.
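As a rough illustration of the counterfactual mechanism the rebuttal refers to, one can edit a single concept in an SAE code and decode it back; the resulting embedding then differs from the original only along that concept's decoder direction. All dimensions, weights, and the concept index below are hypothetical, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 64, 512  # toy embedding dim and SAE dictionary size (illustrative only)
W_dec = rng.standard_normal((h, d)) * 0.05

def steer(z, concept_idx, strength, W_dec):
    """Scale one concept's activation in an SAE code z, then decode."""
    z2 = z.copy()
    z2[concept_idx] *= strength
    return z2 @ W_dec

z = np.abs(rng.standard_normal(h))            # toy nonnegative code
original = z @ W_dec
counterfactual = steer(z, concept_idx=7, strength=0.0, W_dec=W_dec)  # erase concept 7
delta = counterfactual - original             # equals -z[7] * W_dec[7]
```

Because decoding is linear, the edit is localized: the difference between the counterfactual and the original embedding is exactly the (negated) decoder row of the edited concept, scaled by its activation.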
Summary: This article explains the CLIP model from the perspective of model parameters. The author uses sparse autoencoders to sparsify the content learned by the neurons of the CLIP model. Specifically, the author proposes Matryoshka Sparse Autoencoder, a hierarchical encoder used to organize conceptual information hierarchically when training the SAE. The author presents many applications to prove the practicality of this method. ## update after rebuttal Claims And Evidence: The author mentioned in the introduction that the previous SAE method was limited by Top-K and L1 losses, and the hierarchical learning proposed in this paper can overcome these limitations. Although the key issues are mentioned, it seems unclear how the method in this paper solves these problems. Methods And Evaluation Criteria: The authors adopted the evaluation metrics commonly used in previous work to evaluate SAEs. Theoretical Claims: There is no theoretical analysis in the article; if I missed it, please remind me. Experimental Designs Or Analyses: The authors designed many downstream applications to demonstrate the practicality of the proposed method. Supplementary Material: The appendix offers more experiments. Relation To Broader Scientific Literature: The research in this article helps promote the development of explainable AI, especially parameter-level explanations. Essential References Not Discussed: Not sure Other Strengths And Weaknesses: **Strengths:** It is very meaningful to study the use of sparse autoencoders to explain model parameters. The author proposed hierarchical learning to distribute the concept learning of sparse autoencoders from coarse-grained to fine-grained, and demonstrated the practicability of the method in multiple downstream tasks. **Weaknesses:** - The authors seem to lack quantitative comparisons with existing SAE-based neuron interpretability methods, comparing only with simple ReLU- and TopK-based methods.
- Similarly, although the method is practical in downstream tasks, it is difficult to get an intuitive sense of the degree to which it surpasses advanced methods. I suggest that the authors supplement the comparison with the latest methods, explain the same parameters of the same model, and then run some applications to illustrate intuitively how different SAE-based methods differ in their ability to explain parameters. - The authors explain CLIP throughout the paper, but the method does not seem specific to the CLIP model and could still be applied to other single-modal models. The authors should clarify the relationship between the method and CLIP: is it a specific design, or is it universal? (If it is universal, it is recommended that the authors add some experiments interpreting the parameters of other models.) Other Comments Or Suggestions: NA Questions For Authors: NA Ethical Review Flag: Flag this paper for an ethics review. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We gratefully thank the reviewer for their engagement with our work and appreciation of our contribution.** > The authors seem to lack quantitative comparisons with some existing SAE-based neuron interpretability methods, only some simple ReLu and TopK-based methods. Existing SAE-based methods have been primarily compared in interpreting language models, **while our work goes beyond to specifically interpret vision–language models instead**. Therefore, we choose ReLU and TopK as the potential baseline methods. As suggested by Reviewer #T2kP, we now attempted to compare with JumpReLU SAE. Unfortunately, the JumpReLU preprint has no official code implementation available. We now used the unofficial implementation shared in the SAELens software to compare with MSAE (ImageNet-1k, CLIP ViT-L/14). In our preliminary experiments, the results of JumpReLU are virtually the same as those of ReLU. Note that a similar issue with reproducibility has been publicly pointed out in “Interpretability evals case study” (Transformer Circuits Thread, August 2024). We now share our result in the updated Tables 6 & 7 in anonymous cloud storage at https://drive.google.com/drive/folders/11OOSpexmU5ul8nBhJqpaf65Dh_eqBlZY (file `a_updated_tables_6_and_7.pdf`). We initially also considered comparing with Gated SAE, but acknowledged that TopK outperformed it based on the results from (Gao et al., 2024). > Similarly, although the method is practical in downstream tasks, it is difficult to intuitively feel the degree to which it surpasses advanced methods. I suggest that the author consider supplementing the latest comparison method, and then try to explain the same parameters of the same model, and then conduct some applications to help us intuitively feel the difference in the ability of different SAE-based methods to explain parameters. Thank you for highlighting the practical utility of our method. 
We kindly note that it is unclear to us which “advanced methods” the reviewer meant. We agree it is challenging to provide an intuitive sense of how different SAE architectures explain the model. Thus, **we rely on 5 quantitative metrics established in the literature, and introduce 2 novel metrics (CKNNA and DO) to assess MSAE against baselines**, an evaluation previously unexplored for CLIP (Section 4). We then demonstrate the best-performing SAE's utility in three downstream tasks (Section 5), providing visual explanations to further illustrate its effectiveness (Appendix Figures 18–23). To address the reviewer’s suggestion, we supplement our work with additional visual examples akin to Figure 10. **We now share figures with explanations of 502 concepts across 6 different SAEs (appearing in Table 2) in anonymous cloud storage at** https://drive.google.com/drive/folders/11OOSpexmU5ul8nBhJqpaf65Dh_eqBlZY (files `concept_visualization_*.pdf`) to help build this intuition. Each figure presents the inputs that most strongly activate a given concept across both modalities (vision and language). For a broad overview, we visualize all validated concepts (see a discussion in Appendix A) appearing in any of these 6 models (see the last column in Table 3). > The author explains CLIP throughout the paper, but the method in this paper does not seem to be a method for explaining the CLIP model specifically, and it can still be applied to other single-modal models. The author needs to clarify the relationship between the method in this paper and CLIP. Is it a specific design, or can it be universal? (If it is universal, it is recommended that the author can add some experiments for interpreting other model parameters). This is a good point. While we demonstrate our method on the multi-modal CLIP model for broader appeal (encompassing both vision and language), our approach is indeed applicable to single-modal models as well.
We chose CLIP due to its widespread interest and the potential for insightful interpretations across modalities. Unlike much of the previous work, which focused on language models, we apply SAE to interpret vision–language models often used as feature extractors. Applying an SAE to the final representation of a feature extractor like CLIP avoids the issue of concept re-emergence after steering, which remains a challenge in LLM applications. Note that our steering presented in Sections 5.2 & 5.3 uses CLIP’s final representations, circumventing this problem. We appreciate the reviewer’s feedback and will incorporate the discussion in the next version of the paper. --- Rebuttal Comment 1.1: Comment: I appreciate the author's response, and after final consideration, I have decided to keep my score.
Cowpox: Towards the Immunity of VLM-based Multi-Agent Systems
Accept (poster)
Summary: This paper introduces COWPOX, a novel defense approach designed to enhance the robustness of multi-agent systems (MAS) against adversarial attacks. Vision-Language Model (VLM)-based agents, which perceive and interact with their environment through vision and language, are integral to MAS. However, existing MAS designs often overlook robustness, allowing exploits to spread and compromise system integrity. COWPOX addresses this by implementing a distributed mechanism that limits the expected number of infections and improves agent recovery rates. The core innovation is the generation and distribution of a special cure sample that immunizes agents pre-exposure and aids in the recovery of infected ones. The paper empirically demonstrates COWPOX's effectiveness and provides theoretical robustness guarantees. ## update after rebuttal I have reviewed the authors' response. They mentioned not over-relying on the LLM judge and supplemented the experiments with a new agent architecture, which addressed my concerns. Therefore, I maintain my positive score. Claims And Evidence: Well supported. Methods And Evaluation Criteria: Makes sense. Theoretical Claims: No Experimental Designs Or Analyses: Appears to be a comprehensive experiment. No further questions. Supplementary Material: Not yet. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: Not at all; to my knowledge, AgentSmith is indeed the first article to address the attack (ICML 2024), and I believe this is a defense article that closely follows the SOTA. Other Strengths And Weaknesses: Overall, I believe this article is well done. From a methodological design perspective, I find it reasonable, and the presentation of images and formulas appears to be meticulously polished. However, I am not familiar with RAG and propagation-based jailbreaking, so I am unable to make a judgment on the technical details of the article.
I would prefer to leave the assessment of the article to the other reviewers and the Area Chair (AC). Weakness: 1. The design of multi-agent systems is still in the exploratory phase, which may give rise to multiple schools of thought, yet the authors have only discussed one. In other words, the generalizability of the proposed defense mechanism may be questionable. Other Comments Or Suggestions: N/A Questions For Authors: 1. Does Cowpox reduce the performance of the system when all agents are clean? In other words, will false positives from the Output Analysis Module affect the system? I am quite curious about some specific examples of the LLM-as-an-inspector. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your praise of the completeness of the paper and the design of our method. We hope our response resolves your concerns.

> ***1. About the LLM-based inspector***

We agree that the inspector is important to our method. We give the details of the performance of the inspector used in the paper below:

| | ACC (%) | FPR (%) | FNR (%) |
|-|-|-|-|
| 1 try | 84.7 | 2.8 | 12.5 |
| 3 tries | 89.1 | 7.9 | 3.0 |

- Settings: The evaluation is conducted on a combination of malicious outputs from AdvBench and normal (benign) outputs from the ordinary chat history of our agents. We test the inspector in 1-turn and 3-turn settings. For the multi-turn test, a sample is labeled as harmful if it is classified as harmful in **ANY** turn.
- Analysis: Overall, although not able to achieve 100% accurate detection, **our inspector is effective enough for Cowpox**. This is because:
  - **Very few benign samples will be misclassified.** Cowpox agents account for only a very small portion of the total, so less than 2% of benign samples will be misclassified.
  - **Misclassifying benign samples has little impact on the system.** A cure crafted from a misclassified benign sample still fully preserves its original information, so there are almost no side effects.
  - **FNR approaches 0 as the number of test rounds increases.** The system only needs the virus to be detected once in any chat round to achieve an effective defense. Cowpox does not require a very strong inspector.

> ***2. The generalizability of the proposed defense mechanism may be questionable.***

**The generalizability can be demonstrated by the following two experiments**, in which we modify the original system to simulate diverse systems.

- **Effective in systems with different structured communication settings.**
  - Settings: We assume that the system is hierarchical. The agents are divided into 8 groups, and can only do pair-wise chat with their group members.
In each group, there are 3 manager agents, which communicate with the manager agents of other groups every 4 rounds. Such hierarchical structures are often adopted in works like AutoGen [[1]](https://arxiv.org/abs/2308.08155). - Results: [link](https://imgur.com/2gN812H). - Analysis: We can see that the virus infects agents more slowly, and the Cowpox cure likewise reaches agents more slowly. This is because the hierarchical structure can temporarily isolate the infected agents. - **Effective across diverse underlying base models and RAG systems** - Settings: We adopt 2 VLM models (LLaVA-1.5, InstructBLIP) and 2 RAG encoders (CLIP, DINO V2) in the experiment. Each agent initially chooses its base model and RAG encoder *randomly*, forming a multi-agent system consisting of heterogeneous agents. - Results: [link](https://imgur.com/LOQiZ0R). - Analysis: From the figure, the virus in this system performs worse, while Cowpox is almost equally effective. This is because the cure only targets the RAG system, so fewer models are involved in crafting it, making the optimization easier. --- > ***Reference*** 1. Wu, Qingyun, et al. "Autogen: Enabling next-gen llm applications via multi-agent conversation." arXiv preprint arXiv:2308.08155 (2023). --- Rebuttal Comment 1.1: Comment: I have reviewed the authors' response. They mentioned not over-relying on the LLM judge and supplemented the experiments with a new agent architecture, which addressed my concerns. Therefore, I maintain my score and recommend the paper. --- Reply to Comment 1.1.1: Comment: We respectfully appreciate your recommendation and your decision to maintain a positive attitude towards our work.
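The infection-versus-cure dynamics discussed in this rebuttal can be illustrated with a toy simulation in which both the virus and the cure spread through pairwise chat, but the cure dominates on contact. The agent counts, pairing rule, and dominance rule below are simplifying assumptions for illustration, not the paper's experimental setup.

```python
import random

def simulate(n_agents=64, n_cowpox=4, rounds=30, seed=0):
    """Toy pairwise-chat dynamics: infection and cure both spread by
    contact, but the cure dominates, so cured agents never relapse."""
    rng = random.Random(seed)
    state = ["clean"] * n_agents
    state[0] = "infected"               # one attack agent
    for i in range(1, 1 + n_cowpox):
        state[i] = "cured"              # a few Cowpox agents
    infected_per_round = []
    for _ in range(rounds):
        order = list(range(n_agents))
        rng.shuffle(order)              # random pairwise chat this round
        for a, b in zip(order[::2], order[1::2]):
            pair = {state[a], state[b]}
            if "cured" in pair:         # cure outranks the virus on contact
                state[a] = state[b] = "cured"
            elif "infected" in pair:    # otherwise infection spreads
                state[a] = state[b] = "infected"
        infected_per_round.append(state.count("infected"))
    return infected_per_round

history = simulate()
```

Because the cured population can only grow under this rule, the infected count eventually drops to zero, which is the negative feedback loop the defense aims for; restricting chats to subgroups (as in the hierarchical setting above) would slow both the virus and the cure.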
Summary: This paper targets the problem of attack agents in vision-language-model-based multi-agent systems. The authors propose a defense method named Cowpox. It generates and distributes cure samples, which score higher in retrieval-augmented generation and can help infected agents. Experiments show the effectiveness of the proposed method. ## update after rebuttal My major concern is the limited generality of the targeted attack scenarios and the proposed defense method. While the authors tried to address this by showing more cases, the content added during the rebuttal somewhat deviates from the initial manuscript. I choose to maintain my current rating and suggest the authors entirely rewrite the paper to make the attack and defense generalizable to diverse scenarios rather than targeting only one scenario throughout the paper. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: no Experimental Designs Or Analyses: yes Supplementary Material: no Relation To Broader Scientific Literature: unclear Essential References Not Discussed: no Other Strengths And Weaknesses: ## Strengths 1. The paper has a good structure and is easy to follow 2. The effectiveness of the proposed method is supported by sufficient experimental results. ## Weaknesses 1. The foremost concern for me is the motivation and practicality of the targeted scenario. In practice, multi-agent systems are usually applied to solve complex tasks or to simulate real-world scientific experiments. In these application scenarios, one would control all of the agents to achieve the goals, and there seems to be no chance that an attack agent will be involved; for example, one may use well-performing open-source LLMs released by Meta or Qwen, and these models will not deliberately attack the system. That is, if there is no such attack in practice, it is unclear to me why such research (this paper) is necessary. 2.
Meanwhile, the proposed defense method is too specific. It is designed around the system proposed by Gu et al. (2024) and verified only on that system. This makes the scope of this paper narrow. Other Comments Or Suggestions: no Questions For Authors: no Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your valuable feedback, which we believe will improve the paper. We appreciate that you liked the structure of our paper and our method’s effectiveness. Please find our responses below. > ***1. The practicality of the targeting scenarios*** We agree with your comment that a setting in which the user has full access to all the agents is common in many well-known multi-agent systems. However, this setting does not apply to all multi-agent systems. Our scenario is practical, especially in the following cases. - **Multi-embodied-agent system.** Pioneering works like [Co-LLM-Agent](https://github.com/UMass-Embodied-AGI/CoELA) create systems with multiple embodied agents operating in the physical world. In this scenario, an attacker can introduce the attack agent by simply putting it somewhere close to the benign agents to infect the system. - **Multi-agent system on edge devices.** There are works [2-3] investigating distributed multi-agent systems. In these systems, the agents are implemented on different edge devices like mobile phones or vehicles, where it is hard for the user to access all of the agents, making it possible for a malicious agent to get involved. - **Blockchain-based decentralized multi-agent systems.** Systems like [SingularityNET](https://singularitynet.io), [Fetch.AI](https://www.fetch.ai), and [HyperCycle](https://www.hypercycle.ai) **allow the agents owned by different users to collaborate, share data, and learn from each other anonymously, which further widens the attack surface.** An attack started by a malicious agent can be easily conducted in these scenarios, and distributed countermeasures like Cowpox seem to be the only solution.
- Finally, **The threat of the infectious attack could come NOT from the agents, but from the working environment of the system.** Even in a situation where the user has full access, a virus in the environment can always turn an originally benign agent into an attack agent by infecting it in the wild. For example, multi-agent systems like [Camel](https://www.camel-ai.org) and [BabyAGI](https://github.com/yoheinakajima/babyagi) allow the agents in the system to access the Internet, where the virus sample might be accidentally obtained by the agent and infect it, turning this unlucky agent into an attacker. The attack can also be achieved by other approaches such as poisoning the RAG database of certain agents. > ***2. The proposed defense method is too specific*** We fully understand your concerns about the generalizability of Cowpox. We'd like to claim that our method is a general defense strategy against infectious attacks. - **Our method is designed to deal with an attack paradigm instead of a specific attack.** The core idea of Cowpox is **to isolate the suspicious sample from the agent by preventing it from being retrieved.** This indicates that our method is able to interrupt the transmission process of the infectious attack in ANY multi-agent system with RAG. - **The alignment in the tested system helps us better evaluate our defense.** The attack performance of Gu et al. (2024) is thoroughly demonstrated in their paper. **We mainly test our defense on their system to fairly show its effectiveness.** To further demonstrate the generalizability, we investigate two different multi-agent systems to show that Cowpox can work in diverse settings: - **Effective in systems with different structured communication settings.** - Settings: We assume that the system is hierarchical. The agents are divided into 8 groups, and can only do pair-wise chat with their group members.
In each group, there are 3 manager agents, which communicate with the manager agents of other groups every 4 rounds. Such hierarchical structures are often adopted in works like AutoGen [[1]](https://arxiv.org/abs/2308.08155). - Results: [link](https://imgur.com/2gN812H). - Analysis: We can see that the virus infects agents more slowly, and the Cowpox cure likewise reaches agents more slowly. This is because the hierarchical structure can temporarily isolate the infected agents. - **Effective across diverse underlying base models and RAG systems** - Settings: We adopt 2 VLM models (LLaVA-1.5, InstructBLIP) and 2 RAG encoders (CLIP, DINO V2) in the experiment. Each agent initially chooses its base model and RAG encoder *randomly*, forming a multi-agent system consisting of heterogeneous agents. - Results: [link](https://imgur.com/LOQiZ0R). - Analysis: From the figure, the virus in this system performs worse, while Cowpox is almost equally effective. This is because the cure only targets the RAG system, so fewer models are involved in crafting it, making the optimization easier. # References 2. Zhang, Chi, et al. "Appagent: Multimodal agents as smartphone users." arXiv preprint arXiv:2312.13771 (2023). 3. Wang, Junyang, et al. "Mobile-agent: Autonomous multi-modal mobile device agent with visual perception." arXiv preprint arXiv:2401.16158 (2024).
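The core mechanism described in this rebuttal, where the cure outscores the virus in retrieval so the virus is never surfaced, reduces to a ranking property of the RAG store. Below is a toy cosine-similarity illustration; the embedding dimension, perturbation scales, and database contents are made up for the example.

```python
import numpy as np

def top1_retrieve(query, keys, labels):
    """Toy RAG step: return the label of the highest cosine-similarity key."""
    q = query / np.linalg.norm(query)
    K = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    return labels[int(np.argmax(K @ q))]

rng = np.random.default_rng(0)
query = rng.standard_normal(32)
virus = query + 0.30 * rng.standard_normal(32)  # adversarial record placed near the query
cure = query + 0.05 * rng.standard_normal(32)   # cure crafted to score even higher
benign = rng.standard_normal((5, 32))           # unrelated records

keys = np.vstack([benign, virus, cure])
labels = ["benign"] * 5 + ["virus", "cure"]
retrieved = top1_retrieve(query, keys, labels)
```

Because the cure sits closer to the query than the virus, top-1 retrieval returns the cure instead of the virus, breaking the transmission chain without touching the underlying VLM.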
Summary: This paper addresses the vulnerability of Vision Language Model (VLM)-based multi-agent systems to infectious jailbreak attacks, where a compromised agent can spread malicious content to other agents, undermining the system's integrity. The paper proposes a novel defense mechanism called COWPOX, which aims to provide immunity to such systems. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes. Proposition 4.1 has been checked. Experimental Designs Or Analyses: yes Supplementary Material: I have checked the code. Relation To Broader Scientific Literature: The paper builds upon existing research on jailbreak attacks against VLMs, particularly infectious jailbreak attacks in multi-agent systems. It differentiates itself by proposing a defense mechanism tailored to multi-agent systems, addressing the limitations of individual model defenses. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths
- Novelty: The paper tackles a critical and relatively unexplored security vulnerability in VLM-based multi-agent systems.
- Well-Defined Problem: Clearly articulates the problem of infectious jailbreak attacks and its implications for system robustness.
- Comprehensive Approach: Combines a practical defense mechanism (COWPOX) with theoretical analysis and empirical validation.
- Solid Empirical Results: The experimental section provides evidence for the effectiveness of COWPOX under different attack strategies and defender abilities.
- Clear Presentation: The paper is generally well-written and organized, with clear explanations of the proposed method and experimental setup.
- Addresses Practical Constraints: Acknowledges and addresses real-world constraints such as the limited control a defender might have over the agents in the system.
Weaknesses
- Scalability Concerns: While the paper addresses the number of agents, further discussion of the computational overhead of the LLM-based inspection (Output Analysis Module) in large-scale systems would be beneficial. How does the complexity of this module scale with the number of agents or the size of the chat histories?
- Adaptive Attack Complexity: While the paper mentions resistance to adaptive attacks, the complexity of the adaptive attack used in the evaluation could be further elaborated. Are there more sophisticated adaptive strategies that could potentially circumvent COWPOX?
- Generalizability: The experiments are conducted with a specific VLM (Llava-1.5-7B) and a specific multi-agent system setup. It would be useful to discuss the potential generalizability of COWPOX to other VLM architectures and multi-agent system designs.

Other Comments Or Suggestions: no Questions For Authors: see the strengths and weaknesses sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate that you liked the novelty of our paper and all the other strengths stated. Please find our responses below. > ***1. Scalability Concerns*** **The computational overhead of the LLM-based inspector is linearly related to the number of chat rounds and the number of Cowpox agents, and is not related to the length of the chat history**, as it only examines the output of the current chat round in our design. A more capable inspector that also goes through the chat history could be implemented, though, in case of a more sophisticated attack. We will discuss this issue in the revised paper. > ***2. Adaptive Attack Complexity*** The adaptive attack discussed in the paper might seem simple, but it is exclusively designed for Cowpox. Below we provide the rationale for this claim: - **The fundamental mechanism of Cowpox.** The cure samples crafted by Cowpox eliminate the virus sample by gaining a higher RAG score. This means the virus cannot be retrieved from the database of the agent, which interrupts the transmission. - **The adaptive attack targets the fundamental mechanism of Cowpox.** To target this mechanism, the adaptive attack tries to craft a sample that could score even higher than the cure sample. This is guaranteed by comparing the average RAG score of the cure sample and that of the virus, and not stopping the optimization until the RAG score of the adaptive virus exceeds that of the cure sample. **We considered a more sophisticated version of the adaptive attack**; in conclusion, **Cowpox is still effective in this case.** - Settings: we adopted meta-learning to make the jailbreak target more resilient to the recovery process of Strategy ① discussed in the paper. The adaptive virus is crafted by a dual optimization process: - The inner loop simulates the process of Strategy ①. - The outer loop maximizes the RAG score and minimizes the loss between the VLM output and the target outputs.
- Results: [link](https://imgur.com/PossSSf) - Analysis: We find that the adaptive virus crafted this way tends to be less effective, as demonstrated in the provided link. On the other hand, it takes more rounds for the cure sample to recover all the agents, indicating the effectiveness of the meta-learning. Lastly, it is always possible that more sophisticated adaptive attacks that circumvent our defense will be developed in the future; studying them would be an interesting follow-up to our work. > ***3. Generalizability*** Thank you for your valuable advice. We conducted experiments on agents based on different VLMs and RAG systems. **The effectiveness holds when heterogeneous agents coexist in the same system.** - Settings: we adopt 2 VLM models (LLaVA-1.5, InstructBLIP) and 2 RAG encoders (CLIP, DINO V2) in the experiment. Each agent initially chooses its base model and RAG encoder *randomly*, forming a multi-agent system consisting of heterogeneous agents. - Results: [link](https://imgur.com/LOQiZ0R). - Analysis: From the figure, the virus in this system performs worse, while Cowpox is almost equally effective. This is because the cure only targets the RAG system, so fewer models are involved in crafting it, making the optimization easier. --- Rebuttal Comment 1.1: Comment: Thanks for the author's response. I keep my rating unchanged. --- Reply to Comment 1.1.1: Comment: We respectfully appreciate your valuable comments and response.
Summary: The paper presents Cowpox, a method to prevent infectious jailbreaks in multi-agent systems of VLMs. A single agent can start out with an adversarial example that can affect other agents in the system, and Cowpox provides a method to override this adversarial example with a small number of agents that are part of the defense mechanism. This reduces the number of ‘infected’ agents in the system. Claims And Evidence: Most claims are justified, but some claims around theoretical robustness guarantees depend on a large number of agents and rounds, stable RAG scores, and other assumptions. The defense also depends heavily on the inspector prompt, and the paper does not provide any analysis of false-negative rates in this context. Methods And Evaluation Criteria: The defenses used are consistent with infectious jailbreaks in multi-agent systems. Most experiments use the LLaVA-1.5 7B model with CLIP RAG on toy tasks. The inspector-prompt approach does not come with an analysis of its effectiveness, and the adaptive-adversary setting has limited evaluation. Theoretical Claims: Proofs under the stated assumptions (Propositions 4.1 and 5.1) have been checked and appear valid. Experimental Designs Or Analyses: All agents use the same model and CLIP RAG setup, which does not reflect real-world multi-agent systems. Moreover, a lot of reliance is placed on CLIP or BLEU scores for evaluating outputs, which may not accurately represent system degradation. The experiments also assume a random communication structure, which limits real-world relevance significantly. The only adaptation tested is a single new virus, which is too simplistic for the claims made around resistance against evolving threats. Oversimplification and a narrow scope of evaluations remain an important issue. Supplementary Material: No Relation To Broader Scientific Literature: The paper highlights how security measures in multi-agent literature are underexplored. 
It focuses on a specific aspect, infectious jailbreak attacks based on external RAG systems accessed by all agents in the system (presented in “Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast”). It also relies on literature around stability analysis and simulation based evaluation for defining the problem clearly. However, the scope and impact of this setting (i.e. attack surface) remains limited in academic literature. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths The paper presents a novel way to convert positive feedback loops in infectious jailbreak attacks into negative ones with the use of only a few agents as part of the defense. The focus on system level security is relevant and important to where the field is headed. The theory and experiments, despite their limited real-world implications, are important and the defense mechanism aligns with constraints that would be seen in real multi-agent systems. Weaknesses The paper’s scope is limited to the exact same underlying models interacting in a random communication pattern. Evaluations against adaptive attacks are limited. It is important to evaluate settings that resemble real-world use cases: i.e. agents with different underlying models and goals, structured communication networks, and adaptive attacks. The LLM inspector used is not tested and evaluated in detail. It appears as a black-box in the paper despite its importance to the experimental results. Other Comments Or Suggestions: Clarify the exact problem in the abstract and introduction, along with what the paper does. Too much of the paper is spent on discussing ‘cures’ and ‘infections’ without making the attack surface and specific method very clear. Focus on how such attacks and defenses can appear in the real world while providing concrete directions for future research. 
Questions For Authors: Could the authors provide a concrete quantitative evaluation of the LLM-based inspector prompt for detecting malicious outputs (along with false positive/negative rates)? Could the authors simulate a small scenario where an attacker can create multiple viruses or adaptively optimize them over several rounds? Are there any simpler baseline-level defenses that the authors can test against? Does the effectiveness hold against agents with diverse underlying base models and RAG systems? How can the method be extended to more realistic, structured communication settings? ## Update after rebuttal I thank the authors for addressing my questions. I believe the paper has improved and have increased my score accordingly. Please note that there seems to be some related work that should be cited: "Multi-Agent Security Tax: Trading Off Security and Collaboration Capabilities in Multi-Agent Systems", Peigne-Lefebvre et al., https://arxiv.org/abs/2502.19145 Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your careful review of our paper and thoughtful comments. We hope the following responses will address your concerns. > ***1. About the Inspector:*** We agree that the inspector is important to our method. The performance of the inspector used in the paper is detailed below: ||ACC(%)|FPR(%)|FNR(%)| |-|-|-|-| |1 try|84.7|2.8|12.5| |3 tries|89.1|7.9|3.0| - Settings: The evaluation is conducted on a combination of malicious outputs from AdvBench and normal (benign) outputs from the ordinary chat history of our agents. We test the inspector in 1-turn and 3-turn settings. For the multi-turn test, a sample is labeled as harmful if it is classified as harmful in ANY turn. - Analysis: Overall, although not able to achieve 100% accurate detection, **our inspector is effective enough for Cowpox.** This is because: - **Very few benign samples will be misclassified.** There are very few Cowpox agents in the system, so less than 2% of benign samples will be misclassified. - **Misclassifying benign samples has little impact on the system.** The cure crafted from a misclassified benign sample still perfectly contains its original information, so there are almost no side effects. - **The FNR approaches 0 as the number of test rounds increases.** The system only needs the virus to be detected in any chat round ONCE to achieve an effective defense. Cowpox does not require a very strong inspector. > ***2.1 Multiple Viruses*** **Cowpox works well in the multiple-virus scenario.** - Settings: We crafted 10 different viruses carried by random agents initially. They are based on different images and malicious outputs. - [Results](https://imgur.com/kCxmQqV). - Analysis: We can see that there is also competition between the viruses. 
All the infection rate curves approach 0 by the end of the simulation. > ***2.2 Adaptively Optimized Samples Over Rounds:*** **Cowpox makes the system increasingly robust to similar viruses.** - Settings: The attacker continually collects the cure sample for the previous virus and optimizes a new virus to have a higher RAG score. - Results: Below we show the peak infection rate of each virus version. |Virus Version|0|1|2|3| |-|-|-|-|-| |Max Infected Rate|0.77|0.26|0.18|0.14| - Analysis: As the attacker continues to adaptively craft the virus, it becomes harder and harder for the virus to gain a higher RAG score, in line with Proposition 5.1. We therefore see a declining peak infection rate. > ***3: About a simple baseline*** We acknowledge the importance of a good baseline. However, we think **it is currently difficult to compare any existing baseline with our work fairly.** - **The infectious attack is itself very new.** As the first system-level defense against such attacks, there is no similar work that we can compare with. - **The existing defenses operate only at the model level.** They only protect individual agents, regardless of the remaining agents in the system, and we would have to assume the defender has access to every agent to achieve reasonable performance. Cowpox, by contrast, works in circumstances where access to the agents is very limited; these model-level defenses are completely ineffective in such settings. > ***4: Effectiveness in various structured communication settings.*** **Cowpox is effective in various structured communication settings.** - Settings: We assume that the system is hierarchical. The agents are divided into 8 groups and can only engage in pairwise chat with their group members. Each group contains 3 manager agents, which communicate with the manager agents of other groups every 4 rounds. Such hierarchical structures are often adopted in works like [AutoGen](https://arxiv.org/abs/2308.08155). 
- [Results](https://imgur.com/2gN812H). - Analysis: We can see that the virus infects agents more slowly, and the Cowpox cure likewise spreads more slowly. This is because the hierarchical structure can temporarily isolate the infected agents. > ***5: Diverse base models and RAG systems*** **The effectiveness holds when heterogeneous agents coexist in the same system.** - Settings: we adopt 2 VLM models (LLaVA-1.5, InstructBLIP) and 2 RAG encoders (CLIP, DINO V2) in the experiment. Each agent initially chooses its base model and RAG encoder *randomly*, forming a multi-agent system consisting of heterogeneous agents. - [Results](https://imgur.com/LOQiZ0R). - Analysis: From the figure, the virus performs worse in this system, while Cowpox is almost equally effective. This is because the cure only targets the RAG system, so fewer models are involved in crafting it, making the optimization easier. > ***6: About the CLIP and BLEU scores*** **The CLIP and BLEU scores are used to measure how well the original virus samples can be restored to normal samples by Strategy ①, rather than system degradation.** They measure how far the output deviates from the benign one; a virus naturally has low scores because its outputs are manipulated.
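The retrieval-displacement mechanism the whole thread hinges on (the top-scoring record wins retrieval, so a higher-scoring cure silently evicts the virus) can be illustrated with a minimal top-1 retrieval sketch. The record names and embeddings below are invented for illustration only:

```python
import numpy as np

# Minimal sketch of retrieval displacement: the agent's RAG returns the
# single best-scoring record, so once a cure sample outscores the virus
# for the relevant query, the virus is simply never retrieved again.
rng = np.random.default_rng(1)

def top1(query, db):
    # cosine-similarity retrieval over a {name: embedding} database
    sims = {name: float(query @ v / (np.linalg.norm(query) * np.linalg.norm(v)))
            for name, v in db.items()}
    return max(sims, key=sims.get)

query = rng.normal(size=32)
db = {
    "benign": rng.normal(size=32),
    "virus": query + 0.5 * rng.normal(size=32),  # virus optimized toward the query
}
before_cure = top1(query, db)                    # the virus dominates retrieval

db["cure"] = query + 0.1 * rng.normal(size=32)   # cure crafted to score higher still
after_cure = top1(query, db)                     # the cure displaces the virus
print(before_cure, after_cure)
```

This is the sense in which the cure "interrupts transmission" without ever deleting the virus from the database: the virus record remains but loses every retrieval.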
Compelling ReLU Networks to Exhibit Exponentially Many Linear Regions at Initialization and During Training
Accept (poster)
Summary: This paper addresses the previously identified problem that the number of linear pieces in neural networks remains bounded regardless of depth. The authors show that for a width-4 network, with a novel initialization and pretraining algorithm, they can maintain an exponential number of linear pieces with respect to depth. They show that with this initialization and pretraining, followed by standard training, a toy net can achieve superior performance on a toy problem. They then discuss the extension to general networks, by dividing the general network into many toy networks and applying their method on each of them. Claims And Evidence: The claims made in the main text are clear, good, and interesting. In particular, these main claims are significant to solving the problem identified by previous works. However, I found the claim made in Appendix A.11, which is arguably the most interesting one to the general audience, a bit confusing. Methods And Evaluation Criteria: The method has a solid mathematical background and makes sense. The evaluation is sufficient to demonstrate the claims made in the main text. Theoretical Claims: Due to the large body of theoretical results and their complexity, I am unable to check the formal proofs. However, the authors provide numerical demonstrations that agree well with their theoretical results, so I am inclined to believe them. Experimental Designs Or Analyses: The experiments involving the toy dataset are good. However, I have a question regarding Tables 1 and 2: why do the authors report minimum errors? I would say reporting the mean error ($L_2$ error) and max error ($L_\infty$ error) is more proper; achieving a good minimum error alone says little about the full picture. I find the experiments regarding CIFAR and ImageNet, discussed in Appendix A.11, quite confusing due to the lack of sufficient details. First, the authors report the loss, but do not mention the loss type. 
I can hardly believe that with a standard cross-entropy loss, such a small network could achieve a loss of less than 0.01 within 2 epochs. I suggest the authors replace Figure 16 with accuracies rather than loss values. Second, the authors do not describe in detail the network they used in this experiment or how they apply their method as the activation. Third, contrary to the experiments in the main body, this section only applies their method as part of the activation function. Why can't we follow the same design made in the main text? Does this still count as the same method? Fourth, they evaluate a single small network architecture in these experiments. Do they have a specific reason for not using a larger network on CIFAR-10? Could the benefits of their method vanish when we further increase the network size? An ablation over architectures would help. I apologize for raising many questions about this appendix section. However, since this section presents the most relevant information about the practice, I do find it confusing and preliminary. In addition, this study deserves a place in the main text. While it is OK for a theoretical work not to perform well on large datasets, it is important to make clear where, and how large, the gap is. Otherwise, follow-up work will be quite hard, which is also contrary to the authors' interest. Supplementary Material: The authors do not include a readme in the supplementary material. Thus, while I had a look at their network architecture and datasets, I did not run their code. Relation To Broader Scientific Literature: This work mainly solves the problem raised by [1]. [1] Hanin, B. and Rolnick, D. Deep ReLU networks have surprisingly few activation patterns. Essential References Not Discussed: I found the references adequate. Other Strengths And Weaknesses: This work stands out in theoretical rigor and its sound evaluation on toy datasets. These evaluations support their construction quite well. 
However, as detailed above, when it comes to more relevant datasets, evaluations are flawed. Other Comments Or Suggestions: None. Questions For Authors: See Section ''Experimental Designs Or Analyses''. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their questions, and we appreciate that they took the time to read the appendix. The reason why the paper currently looks like it does, as Reviewer 1 (jsad) alludes to, is that the paper has been through several rounds of rebuttals, and so layers of preliminary experiments have built up over time and the appendix is now extensive. Originally, the authors’ main intention for the paper was to present the idea that better training algorithms could be created by finding efficient mathematical constructions that still permitted room for trainability. There are still many fundamental mathematical issues to be addressed to ‘properly’ set weights in higher-dimensional settings, and this paper felt like it was at a natural stopping point even with only toy 1-d experiments. However, the reviewer makes a good point that it is valuable to know how far away the naive extension of this method to higher dimensions is from practical usage, and it likely makes sense to restructure the body of the paper to include both the one-dimensional experiments and the experiments from Section A.11. We will address this in our revised version. We will attempt to address the reviewer’s questions in order. In the one-dimensional experiments, the mean and minimum are not distances in function space; they are the mean and minimum over 30 trials. The minimum is needed (and more important than the mean) because half the time the randomly initialized ReLU networks collapse from the dying ReLU issue. The minimum lets us see what happens when training ‘works’ for the various network types. We will clarify this in the manuscript. We agree that accuracy is probably a more interpretable and meaningful metric to display and can switch Figure 16 accordingly. We do include a diagram of what using one of the small networks as an activation would look like at the beginning of the appendix. 
It could be beneficial, though, to describe the implementation further, especially if larger-scale experiments are going to be moved to the main body. Essentially, each neuron in a dense layer gets its own copy of the 1-d experimental networks to learn some piecewise-linear convex activation function for it to use. This adds layers to the network with a 4x4 block-diagonal structure where each subnetwork’s version of that layer corresponds to one of the diagonal blocks. This can be implemented practically by turning the matrices into 4x4xW tensors (to remove all the 0 elements) and using pytorch’s einsum() function. The reason the method in the main body cannot apply directly to higher-dimensional cases is that it only works for 1-d convex functions. Fortunately, activation functions are one-dimensional, and so that provides the most naive and easy route to scale up the construction. As we alluded to earlier, much more math is needed to figure out more elegant solutions. In light of your 4th question, it makes more sense to use a larger and better-studied architecture such as VGG for the CIFAR-10 experiments, as it would make the results more compelling and less arbitrary. We can make that switch for the camera-ready version of the paper. --- Rebuttal Comment 1.1: Comment: Dear authors, Thanks for the rebuttal. It clears up most of my concerns except one: it is mentioned that half of random initializations collapse due to "dying ReLU". However, for Kaiming initialization etc., one specific advantage is that it empirically mitigates the dying ReLU issue. As a result, when you initialize with Kaiming uniform, you hardly ever get training collapses (personally I never observed collapses from thousands of re-inits across hundreds of datasets with Kaiming init). Could you provide more details on what collapse means for the proposed init (empirical details), and why the proposed init leads to this problem? 
--- Reply to Comment 1.1.1: Comment: As far as the authors’ understanding goes, Kaiming initialization was designed to avoid exponential increase/decay in the signal magnitude by normalizing for the layer widths. This is a slightly different issue than the ‘dying ReLU’ phenomenon. The dying ReLU issue happens if every neuron in a layer initializes such that its output is negative over the whole dataset. The ReLU activation then zeros out the entire layer, severing the network. In that case, the network output is constant. Wider layers make dying ReLU less likely to happen, since more neurons give the network more chances to get a signal through at each layer. Deeper networks increase the odds of dying ReLU, because more layers give dying ReLU more chances to happen. The reason you’ve likely never encountered this issue before is that most practical networks operate with architectures that are much wider than they are deep; the networks we explore are 4 neurons wide with arbitrary depth, so they are somewhat exceptional in their shape. The dying ReLU problem can be lessened by the RAAI initialization, which was designed with the issue in mind, but since RAAI is still a random procedure, collapse remains possible even if less likely. One of the advantages of our method is that, because each layer is explicitly forced to compute triangles, the dying ReLU problem cannot happen. This is partly related to why, in Appendix A.10, the block-diagonal constraints on random networks ruin their performance, while the higher-dimensional version of our method is fine, since block diagonality is the natural structure that arises from using the 1-d subnetworks as activations. We will clarify this in our revised manuscript, and we welcome any further discussion with the reviewer. Thank you for your engagement.
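The dying-ReLU failure mode described in this reply is easy to reproduce empirically. The sketch below is our own illustration (not the authors' code): it initializes narrow ReLU networks with a He-style Gaussian scheme and zero biases, and counts how often some layer outputs zero for every input in a batch, which makes the network constant:

```python
import numpy as np

# Estimate how often a narrow, deep ReLU network is "born dead": some layer
# outputs zero for every input in the batch, so the network computes a
# constant function. He-style Gaussian weights, zero biases; a simplified
# stand-in for the randomly initialized networks discussed above.
rng = np.random.default_rng(0)

def network_dies(width=4, depth=20, n_inputs=256, in_dim=1):
    x = rng.uniform(-1.0, 1.0, size=(n_inputs, in_dim))
    dim = in_dim
    for _ in range(depth):
        W = rng.normal(0.0, np.sqrt(2.0 / dim), size=(dim, width))
        x = np.maximum(x @ W, 0.0)   # ReLU, zero bias
        if np.all(x == 0.0):         # the whole layer is dead on every input
            return True
        dim = width
    return False

for depth in (5, 20, 80):
    rate = np.mean([network_dies(depth=depth) for _ in range(200)])
    print(f"width 4, depth {depth}: dead at init in {rate:.0%} of trials")
```

The qualitative trend this illustrates is the one claimed in the reply: at fixed small width, deeper networks have more chances for a layer to die, whereas width gives each layer more chances to pass a signal through.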
Summary: This work builds on the link between the expressiveness of a neural network with ReLU activation functions and its number of linear regions in the output space. The general idea is that the larger the number of linear regions in the output space, the higher the expressiveness of the network (i.e., the easier it would be for the network to implement complex functions). Previous works show that the number of linear regions does not scale with depth for randomly initialized neural networks. The authors instead propose to initialize and (pre)train the model while constraining its weights such that the total network implements a chain of triangle functions, to make sure the number of linear regions scales exponentially with depth. They further illustrate through experiments that the proposed method achieves a higher accuracy than conventional models. Claims And Evidence: The actual claims made are dispersed throughout the paper and are somewhat vague. While I believe the claims might be true to a large extent, I think they are not sufficiently proven. Referring to the following lines in the paper: *62, L: While an exponential increase in the expressiveness of a ReLU network does not necessarily imply an exponential increase in performance, one may intuitively expect a substantial benefit, and our results bear this out* *234, R: (1) we would like to determine how to learn the most effective function representations possible, and (2) to explore how the utilization of an increased number of linear regions can affect a network’s ability to capture underlying nonlinearity in its training data. 
To demonstrate that our networks can learn function representations that better utilize depth,* *355, R; in particular, compelling ReLU networks to approximate functions with exponential accuracy as network depth is linearly increased.* And assuming the following statements are correct: - randomly initialized networks have a number of linear regions that is not influenced by depth (the authors take this from literature), - for the proposed method, the number of linear regions exponentially increases with depth (this is true by design of the proposed method), I recognize the following claims, according to my understanding of the paper / the above mentioned lines: a) the proposed method results in more linear regions w.r.t conventional models of the same depth, b) and, as a main claim: for the same depth, the proposed method results in higher accuracy w.r.t conventional models because of the increase in the number of linear regions (assuming this is the case, see claim a). I think these claims are not sufficiently proven because: Claim (a) -> In the experiments, there is no count of the number of linear regions for the conventional models. This potentially depends on the depth used in the experiments; it is thus unclear whether the proposed method really has more linear regions at all depths. Claim (b) -> the authors make design choices that have effects similar to residual connections (line 218, R) and regularization (line 234, L). It is therefore unclear what the performance gains can be attributed to; i.e., if this is really because the number of linear regions is higher. Some part of the performance gain could be due to the (implicit) use of residual connections/regularization when these are not present in the standard networks the authors compare against. Since there are no details on the architectures of these ‘standard’ networks, I cannot further verify this claim. 
Methods And Evaluation Criteria: To prove that the proposed method leads to better accuracy, the used methods and evaluation seem sufficient for an exploratory paper. However, the methods do not sufficiently support the specific claims about why the method works better (as discussed above). Moreover, important details about the architecture of the standard models are missing. Theoretical Claims: There are some theoretical derivations in the appendix, which I did not check in detail. They do not support the main claims as I understood them (see above). Experimental Designs Or Analyses: See claims Supplementary Material: I quickly checked the supplemental material and it seems substantial and interesting, but I didn’t find any material that further supports the main claims Relation To Broader Scientific Literature: The paper is well situated in the broader literature on the expressiveness of ReLU networks, including other works that constrain and/or reparameterize the network to increase the expressiveness (Elbrächter et al. (2019), Chen & Ge (2024)). The origin and usefulness of triangle functions is also extensively discussed. There is little to no literature referenced that studies the relationship between the expressiveness of a network (in the sense of its potential to express certain functions) and the actual functions it can/does express after training with gradient descent (cf. works on inductive bias) and/or the achieved accuracy. Essential References Not Discussed: Not to my knowledge. Other Strengths And Weaknesses: Strengths: I think the proposed method is very interesting, and shows great promise from the preliminary experiments. The work done for the paper + supplemental seems substantial and could lead to an interesting contribution to the field. Weaknesses: - the paper is confusing in its claims and goals. Is the goal to show that the proposed method ‘just works better’? 
That would need more extensive experiments with more details on the models the work compares against, plus an extension to real-world settings. Is the goal to prove that the method introduces more linear regions for larger depths, and that this in turn leads to higher accuracy? Then the experiments should be changed to unequivocally prove this. While I think the work is very interesting, the “basis” of the paper is lacking. - While great care is taken at some points to explain the setup, the text is also often confusing and unclear. To give a specific example, starting at line 216, it is unclear what the actual (mathematical) goal is. I’m guessing the authors wish to implement the triangle functions by making use of a neural network consisting of two nodes with ReLU activations, constraining the weights and biases such that the [0,1] input is mapped according to a triangle function with a peak at a. However, this is described in a very convoluted and unclear way in the text. It would be helpful to have clear mathematical descriptions, and more concise statements to prepare/guide the reader. Other Comments Or Suggestions: This work is very interesting to me. But although it is extensive, the “basis” of the paper in terms of claims and corresponding experiments is lacking. I would suggest revising this part. With improved quality of the text and figures, it could be resubmitted. Questions For Authors: / Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We are glad to hear that the reviewer found the paper interesting and believes it shows promise. We appreciate the reviewer’s willingness to reconsider their score in light of what we hope will clarify all the claims and goals of the paper. We agree that the extensiveness of the paper could cloud those, and we apologize for a lack of clarity - we hope we can provide a convincing preview here of how we plan to revise the paper to fix these writing issues. The goal of this paper is to present a method that forces ReLU networks to maintain exponentially many linear regions at initialization and during training, thus answering a five-year-old open question in the literature (Hanin and Rolnick, ICML 2019). We note that the goal is not to exponentially improve training speed, or similar goals - the general relationship between number of linear regions and training speed is unknown in the literature, and our paper does not attempt to address or solve this (though intuitively, more linear regions should usually help, and numerical results bear this out). The central claim of our paper (which is constructively proved) is that our triangle parameterization indeed achieves exponentially many linear regions, at least for a simple 1-d neural network case where we can rigorously analyze the effect of the triangle parameterization. We make some further, smaller claims about the importance (or not) of enforcing differentiability, and on extending the method to higher dimensions. Again, we hope this compactly summarizes what the paper is (and isn’t), and we will make this prominent (either text or bullet points) in the revised introduction of our paper. We will similarly revise the experiment descriptions and text to ensure their clarity for the camera-ready manuscript. In figures 5 and 7 the difference in the number of linear regions can be seen visually. 
In general, counting them exactly is combinatorially difficult (for example, appendix A.8 discusses Zaslavsky’s theorem - even for a shallow network, the number of regions grows to the power of the input dimension due to a relationship with Pascal’s triangle) and would likely be infeasible for many of the larger experiments. As mentioned in the response to reviewer 2 (a684), the triangle parameterization is not universal. The 1000-fold improvements are possible because the first phase of training can get the network into a neighborhood of parameter space where a solution that still uses all 2^n regions lives. But we would expect that with greater depths and demands for precision, that neighborhood size would shrink and this might not work for any curves that are not explicitly proven to be represented by weighted sums of composed triangle functions. Interestingly, this mirrors the developments of deep learning in general, where vanishing/exploding gradients and other such issues limited the depth of trainable models. Hopefully this work can serve as a starting point for constructing more expressive parameterizations. When we refer to networks as standard/ordinary/default/etc., we simply mean that there is nothing special about the network - it is a fully connected network of the same dimensions as our experimental models. This is still a fair comparison with our method even though some of the ‘jobs’ we give neurons might resemble residual connections or other empirical techniques because the standard networks have every opportunity to learn a residual neuron of their own, yet they do not. Our method comes entirely from theoretical constructions, so the fact that it bears similarity to empirical techniques ought to reinforce that both our method and the empirical techniques it is similar to are on the right track, rather than counting against us. 
In our experiments we do attempt to separate the effect of having many regions from the mathematical regularization of Theorem 3.1 by training scaling parameters independently. We find that for 3 out of 4 convex functions the result is slightly worse when left unregularized, but still orders of magnitude better than random initialization. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for your rebuttal. I have increased my score. However, while your results are interesting and promising, I still believe your manuscript is lacking in clarity. If you want to reach an audience beyond the researchers directly working on the same topic, I would advise revising the manuscript so that the actual goal and method become clearer.
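The exponential-regions construction discussed throughout this thread can be checked numerically in the simplest 1-d case: composing a triangle (tent) function with itself $n$ times yields a piecewise-linear map with $2^n$ linear pieces. The sketch below is our own illustration, using the symmetric tent map as the triangle function:

```python
import numpy as np

# Count the linear regions of the n-fold composition of the tent map
# t(x) = 1 - |2x - 1| on [0, 1]. Breakpoints of the n-fold composition lie
# at k / 2^n, so a dyadic grid captures every slope change exactly.
def linear_regions(n, grid=4096):
    x = np.linspace(0.0, 1.0, grid + 1)  # dyadic spacing: exact in float
    y = x.copy()
    for _ in range(n):
        y = 1.0 - np.abs(2.0 * y - 1.0)  # tent (triangle) map
    slopes = np.diff(y) * grid           # piecewise-constant slopes, +/- 2^n
    return int(np.sum(np.abs(np.diff(slopes)) > 1e-6) + 1)

print([linear_regions(n) for n in (1, 2, 3, 4)])  # -> [2, 4, 8, 16]
```

Each composition doubles the region count, which is the depth-times-regions scaling the paper enforces at initialization, in contrast to the depth-invariant counts reported for randomly initialized networks.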
Summary: This paper proposes a new parameterization and pretraining method for ReLU networks to ensure that the resulting function is a piecewise-linear mapping with the maximal number of "linear regions" (i.e., $2^{d}$ for a ReLU network of depth $d$). The motivation for such a parameterization appears to be the following: while the number of "linear regions" has been used in the literature as a proxy for expressive power, standard weight parameterizations (and associated randomized initialization schemes) for ReLU networks produce mappings whose average number of linear regions is invariant to the depth $d$. The paper presents numerical experiments on learning 1-D and 2-D functions as well as training neural networks for image classification. Claims And Evidence: As far as I understand, the main claim is that "forcing" the number of linear regions to be exponential during the first stage of training can lead to significant improvements in approximation quality. Currently, the evidence is (in my opinion) quite limited: - There are no "universal approximation" results for the proposed parameterization. - The authors claim that their "pretraining algorithm acts as a preconditioner for, or guide to, the loss landscape". However, there is no discussion of things like convergence rates, conditioning of the loss in the proposed parameterization, etc. - The numerical results in the largest-dimensional setting are limited to a small number of epochs, and the description of the experiment is lacking (for example, the authors describe that they reduced the number of parameters "slightly" without specifying how). Other experiments yield worrying results: in Appendix A.10, the authors describe a dense and a block-diagonal version of a neural network model, and the new parameterization does not yield improvements in the dense case. Methods And Evaluation Criteria: Please refer to my comments under "Claims and Evidence". 
Theoretical Claims: I did not have time to verify the validity of Theorem 3.1. Experimental Designs Or Analyses: Please refer to my comments under "Claims and Evidence". Supplementary Material: I reviewed sections A.9 - A.11. Relation To Broader Scientific Literature: The paper implicitly assumes that the number of linear regions is a good proxy for the approximation quality of the neural network. However, the literature includes work suggesting that overly expressive mappings are not necessarily good (without necessarily implying that deeper = worse). See, for example the following works: [1](https://arxiv.org/abs/2305.15598), [2](https://arxiv.org/abs/2209.15055), and references therein; see also [3](https://arxiv.org/abs/2011.04268) for a perspective on why regularization is important for solving inverse problems with deep neural networks. Essential References Not Discussed: I would not consider the following a "classic" reference, but appears highly related to the structure of optimal weights for deep ReLU networks: https://stanford.edu/~pilanci/papers/geometric_algebra.pdf Other Strengths And Weaknesses: - The results on CIFAR-10 and Imagenet are promising (albeit for a very limited number of epochs; the exact experiment setup is also unclear). Consider moving them to the main text rather than hiding them in the appendix. - In my opinion, the most interesting plot in this paper is the "learning rate stability" plot (Figure 13). The paper's main claim would be much stronger if the authors were able to produce such a plot for high-dimensional problems. Unfortunately, even in this plot, it is unclear if the learning rate is the same during the first and second stages of training. - The presentation of the proposed parameterization needs to be improved. The extension to arbitrary dimensions, which is the most realistic setting, is not addressed at all in the main text. 
I am overall interested in this paper and whether the claimed advantages persist in other problem settings, beyond simple regression and classification problems (e.g., solving inverse problems). However, in my opinion the paper needs considerable revision before it's ready for publication. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We appreciate the reviewer’s comments and perspective on our paper. In light of your observations, there are a few important aspects of this paper we would like to emphasize that were perhaps not immediately clear. The first is regarding our claims about navigation of the loss landscape. When we say we’re improving navigation of the loss landscape, we’re not saying our paper is about proving convergence rates. Instead, what we mean is that we’ve simplified learning for the network so that it can discover better minima that it would not otherwise find. In other words, we’ve disentangled the tasks of learning ‘how do I build functions efficiently?’ from ‘what does this data say?’ - the former question being one that ReLU + random initialization + gradient descent pathologically cannot answer. We agree that while having many linear pieces is necessary to approximate a nonlinear function, it isn’t always sufficient; those extra pieces could be useless or even detrimental if not regularized. We address this with Theorem 3.1 (unfortunately, due to length constraints, most of the math got separated from the main body). Theorem 3.1 is a mathematically principled and explicit method of regularization. It adapts the construction of x^2 presented in the introduction to work with asymmetric triangular waveforms, ensuring that the network output is differentiable by closing all the ‘gaps’ in the derivative (in the infinite depth limit). Without following Theorem 3.1, it’s possible to produce several different kinds of fractal structures that will likely not have good generalization properties. You mention that there are no universality results, and that is correct. The function family given by Theorem 3.1 is highly non-universal. x^2 is the only well-known member, and the other functions share a certain kind of dilated self-symmetry with x^2.
However, even though it isn’t a universal parameterization, the triangle parameterization can get close enough to other one-dimensional convex curves that ordinary gradient descent can nudge the network to produce an otherwise unlearnably low loss (by a factor of up to 1000). This suggests that it’s potentially valuable to search for a better and more expressive parameterization to extend that result to more challenging settings; yet we believe these numerical results already merit sharing the present parameterization and strategy with the research community. You’ve also pointed out that the method when adapted to higher dimensions struggles outside of a block diagonal format. This result is actually to be expected, as all the mathematical development we have done is aimed at building 1-d convex functions using 4-neuron-wide networks. When the weight matrices are freed from the block diagonal constraint, the overwhelming majority of their weights need to be filled in with no additional mathematical insights, so it shouldn’t work well. The authors were pleasantly surprised that the experiments on real data work with any meaningful advantage, given the large mathematical gaps that remain unanswered. That is why the more practical results appear in the appendix (even though they are perhaps more interesting to the average ICML attendee). We see the value of this paper not as being a production ready method, or as being a complete mathematical theory of how to get exponentially better weight setting, but as containing important ideas that can help other researchers along in this direction. Replacing the bedrock algorithms of deep learning is probably a larger task than what one paper can accomplish, and this paper is at a natural stopping point where its substance requires a lot of space to convey, and the remaining challenges ahead are each nontrivial. We think the geometric algebra paper you’ve linked seems very interesting, and we’ll incorporate a citation to it. 
Also, we’ll work to clarify when the parameterization switch happens in the experiments.
Summary: This paper shows how to build better regressions with ReLU feedforward networks. The key idea is to exploit the piecewise linear representation produced by such networks, with the philosophy that models with more of such pieces are likely to better interpolate the function of interest. These pieces are typically called linear regions in the ML literature. In the context of this paper, each piece corresponds to a different gradient of the function being approximated by the regression. For nonlinear functions, it stands to reason that exponentially many pieces are needed for ensuring a good overall approximation. There have been a number of papers that show how to obtain a model with an exponential number of linear regions in the network depth. In practice, however, the average number of linear regions is typically polynomial at best when the parameters are obtained from commonly used initializations and also from training by gradient descent regardless of how they were initialized. The differentiation in this paper is that most of the training (described as pretraining) is carried out through a parameterized space of models of high expressiveness. By adjusting the peaks and valleys produced by each layer, the ordinary parameters (weights and biases) are obtained automatically from those choices. At the very end, regular training with Adam over the parameters is carried out for a limited time. **In full disclosure, I have previously reviewed this paper at NeurIPS 2024 (reviewer 8qXf). The authors addressed most of my questions then. I am surprised that this paper did not get in then, as it had only one opposing reviewer (scores were 7-7-5-3).** Claims And Evidence: The construction used by the authors for parameterizing models of high expressiveness is correct, and in fact known and explored by many authors before (Montufar et al., 2014; Telgarsky, 2015; Serra et al., 2018; Huchette et al., 2023) - all of which are acknowledged in their work.
I would argue that what they are doing is the next logical step: operationalizing trained neural networks with high expressiveness. The insight of doing that by parameterizing a subspace of models is their biggest contribution here. As they observe, this is an alternative to the use of splines, such as in KANs, to which I would add that preserving the model piecewise linear has algorithmic and computational advantages. Methods And Evaluation Criteria: Yes. Theoretical Claims: Regarding Theorem 3.1 and surrounding discussion, I would appreciate if the authors precisely described the value of the network parameters in terms of the scaling factor $s_i$. If the zigzagging function being defined in each layer goes back and forth between 0 and 1, so that composing the function across layers produces an exponential number of pieces, then how can it work properly with scaling? Experimental Designs Or Analyses: I believe that the experiments are adequate. Unlike in the prior submission of this work, the authors have also explored nonconvex and bivariate functions. To be clear, in the context of mathematical optimization, piecewise linear approximations of nonlinear functions rarely go very far in terms of dimension. It may sound strange, but regression can be more challenging than classification. I appreciate the effort of the authors in also including preliminary results about classification in the appendix, but I believe that this is not central to their work. There is plenty of work done in improving classification, for which reason even a meaningful contribution may have only a marginal impact when implemented. I believe that it is important the other reviewers understand this nuance. Supplementary Material: I checked A.11 (application to CIFAR-10 and Imagenet). Relation To Broader Scientific Literature: See "Claims and Evidence" above. Essential References Not Discussed: The authors do a good job with references (see "Claims and Evidence" above). 
Other Strengths And Weaknesses: See other items of this review. Other Comments Or Suggestions: Related to my comment about Theorem 3.1, I would appreciate if the authors were to write down the equations for each neuron. Figure 2 does a reasonable job, but treating the bias as a unit is confusing. Likewise, the discussion about the need for the sum unit is not entirely clear to me. I would have appreciated an example of before and after to explain the significance of having that unit. In Figure 3, it would be helpful if the value of $a_i$ in each layer $i$ was given. Questions For Authors: Please comment on my question about Theorem 3.1 above, and on explicitly writing the equation of each neuron. In Figure 3, is the sum always 1 at x=1? On page 4, what do you mean by "would form a 1 -> 2 -> 1 -> 2 -> 1... shape"? The terms "pretraining skipped", "differentiability not enforced", and "differentiability enforced" are not very clear. If I get it right, are these equivalent to "new initialization, no pretraining", "do pretraining by adjusting peaks and valleys without the sum unit", and "do pretraining by adjusting peaks and valleys with sum unit", all of which followed by regular training? Why are you only reporting min values in Table 3, as opposed to min and mean as in Tables 1 and 2? That seems less informative. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We’re thrilled that you enjoyed our paper, and we appreciate your belief in the merits of this work. The sections you found confusing are things we agree we could clarify, so we’re extremely grateful for your guidance. In a single-hidden-layer network, each hidden neuron provides a basis function to the output (in that case the basis functions are ReLUs at different offsets and orientations). The networks in this paper are narrow and deep, and the basis we would like to use to build the output is the triangle waves produced at each layer (one peak, two peaks, 4 peaks, etc…). The sum neuron, acting similarly to a residual connection, is what allows these hidden features to pass through the remaining layers to be visible to the network output. Without the sum neurons, the network can only output a triangle wave with 2^d peaks. With the sum neurons, the network can perform the approximation of x^2 discussed in the introduction, as well as approximating a family of differentiable convex functions around it (which is the essence of Theorem 3.1: it shows the correct coefficients for the sum neuron to use that will close the ‘holes’ in the derivative). Appendix A.1 gives the equations for the neurons in matrix form. It got separated from section 3 due to the length requirements. But we agree that a description of the individual neurons in equation form is probably clearer, so we will try to rework section 3 to do so. Perhaps the following is a clearer explanation of the network structure: Building a triangle: $t1(x) = ReLU(x)$ $t2(x) = ReLU(x-a)$ $output = \frac{t1}{a} - \frac{t2}{a-a^2}$ The weight $\frac{1}{a}$ is chosen so that the output is 1 at $x=a$. The weight on $t2$ is equal to $\frac{1}{a} + \frac{1}{1-a}$ so that it negates $t1$ and then makes the output zero at $x=1$. This would give the diamond-shaped network shown in the top left of Figure 2.
If we wanted to compose this network to make more oscillations, then the output node would become the input node for the next diamond-shaped network, which gives the awkward 1x2x1x2x1x2… pattern, which shows up in some of the background literature. Instead of collecting the $t1$ and $t2$ neurons into the output unit, they can be assembled directly in the input of the next layer using: $t1_{i+1} = ReLU (\frac{t1_i}{a_i} - \frac{t2_i}{a_i-a_i^2})$ $t2_{i+1} = ReLU (\frac{t1_i}{a_i} - \frac{t2_i}{a_i-a_i^2} - a_{i+1})$. (Maybe this is an unimportant distinction that can be skipped, and we could just give the equations in a constant-width format, or maybe it’s important to go over - we are open to feedback.) The sum neuron can be computed as: $sum_{i+1} = ReLU(sum_i - s_i*(\frac{t1_i}{a_i} + \frac{t2_i}{a_i-a_i^2}))$. The ReLU is irrelevant here since the output is always positive. We need to do one more trick to avoid having $s_i$ directly stored as a weight since it will be exponentially small (which could be problematic for storing or optimizing it). We use the network to iteratively apply the ratio $S_i = s_i/s_{i-1}$ in each layer, decaying the amplitude of the outputs of the t1 and t2 neurons. This also means that the bias has to be a neuron, so that it too can gradually scale down. $sum_{i+1} = ReLU(sum_i - S_i*(\frac{t1_i}{a_i} + \frac{t2_i}{a_i-a_i^2}))$ $t1_{i+1} = ReLU(S_i*(\frac{t1_i}{a_i} - \frac{t2_i}{a_i-a_i^2}))$ $t2_{i+1} = ReLU(S_i*(\frac{t1_i}{a_i} - \frac{t2_i}{a_i-a_i^2} - a_{i+1}b_i))$ $b_{i+1} = ReLU(S_i*b_i)$ In Figure 3 the sum will always be equal to 1 at $x=1$ since the network is set up to subtract the triangle waves from each layer from y=x, and the waves all output 0 at $x=1$. Labeling the tables is quite difficult since the experiments are complicated to describe, and there is only room for 2 or 3 words.
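As a numerical sanity check on the triangle equations in this rebuttal, here is a minimal NumPy sketch of the symmetric case ($a_i = 1/2$, scaling $s_i = 4^{-i}$), reproducing the well-known approximation of $x^2$ obtained by subtracting composed triangle waves from $y = x$. The function names are illustrative, not from the paper, and this sketch only covers the symmetric special case, not the asymmetric waveforms of Theorem 3.1.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def triangle(x, a=0.5):
    # One "diamond" block from the rebuttal: t1 = ReLU(x), t2 = ReLU(x - a),
    # output = t1/a - t2/(a - a^2); rises to 1 at x = a and returns to 0 at x = 1.
    return relu(x) / a - relu(x - a) / (a - a**2)

x = np.linspace(0.0, 1.0, 1001)
approx = x.copy()   # start from y = x, as in the rebuttal's Figure 3 discussion
wave = x.copy()
for i in range(1, 9):
    wave = triangle(wave)   # each composition doubles the number of peaks
    approx -= wave / 4**i   # subtract with scaling s_i = 4^{-i}

# After 8 compositions the result is within 2^(-18) of x^2 on [0, 1]
err = float(np.max(np.abs(approx - x**2)))
```

With the symmetric choice the sum telescopes to $x - \sum_i g_i(x)/4^i \to x^2$, matching the rebuttal's remark that the waves all vanish at $x = 1$, so the output there stays exactly 1.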
You are correct about “Pretraining skipped” - that setting encodes the triangles into the network weights at initialization, but then only does ordinary gradient descent. The other two labels “differentiability enforced” and “differentiability not enforced” are about the scaling factors, rather than the sum neuron being present/absent. Theorem 3.1 is about how to pick the scaling factors ‘correctly’ to sum the waves in the sum neuron to get a differentiable output. Weighting the waves differently in the sum can give you fractals or other badly behaved functions in the output. The 1-dimensional experiments show that ‘holding gradient descent’s hand’ is actually helpful, and that choosing the scaling coefficients to make the output differentiable can act as an explicit regularizer. The reason only the minimum values are reported in Table 3 is that all 4 of the functions are included, so we needed to save space. The minimum is more important to look at than the mean because we’re comparing against a standard random network, which will collapse from the dying ReLU issue half of the time (and thus have a disproportionately bad mean).
Catoni Contextual Bandits are Robust to Heavy-tailed Rewards
Accept (spotlight poster)
Summary: This paper studies contextual bandits with heavy-tailed rewards. If the variance is known, the authors use the Catoni estimator to achieve a near-optimal regret upper bound. If the variance is unknown, but the variance of the squared noise is bounded by the variance times a constant factor, then the authors propose another method that first uses a Catoni estimator to estimate the variance, and then applies one more Catoni estimator based on the estimated variance. They prove that it also achieves a near-optimal regret upper bound. =====After rebuttal===== I read the rebuttal. My score remains. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I did not check the detailed proofs in the appendix. Experimental Designs Or Analyses: N/A Supplementary Material: I did not check the detailed proofs in the appendix. Relation To Broader Scientific Literature: It could be helpful in real-world applications with heavy-tailed noise, for example, the waiting time in routing systems. But no such experiments are provided in the paper. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: - The paper's algorithm does achieve better regret upper bounds when the variances of the arms are small. - The concentration result for the Catoni estimator is a novel contribution. Weaknesses: - In Algorithm 1, choosing the action $x_t$ and estimating $\hat{f}_t$ seem to require a double oracle, which can be inefficient. Even if Algorithm 3 improves the estimation of $\hat{f}_t$, choosing $x_t$ can still be inefficient. - There are no experiments, and I am wondering about the applicability of these results. - Some parts are not clear enough; please see the questions below. Other Comments Or Suggestions: N/A Questions For Authors: 1. In Table 1, it seems that DistUCB has a better regret. In the footnote, it is explained that "DistUCB needs to estimate the full reward distribution rather than just the mean"; does this mean it will have a larger $\tilde{d}_F$?
If the function class is restricted to Gaussian or exponential, what is the difference between $\tilde{d}_F$ and $d_F$? 2. In Assumption 4.1, it is assumed that $Var[\eta^2] \le cVar[\eta]$. Is it common to have a small $c$ in practice? For example, if the noise distribution is exponential, this $c$ could be very large. 3. In Theorem 3.1, the regret lower bound depends on $\sigma_t$, which depends on $\pi_t$, i.e., the policy used. This is a little strange. In my opinion, the regret lower bound should be independent of the policy used. Can you explain more about this? 4. Why does Eq. (4) hold? Is a "[ ]" missing? 5. I can understand why equation (3) makes sense. But why do we need to use this kind of "indirect" estimator? What if we just let $\hat{f}_t$ be the one with minimum $\sum_i E[(f(x_i) - y_i)^2]$? Could you explain more about this? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your insightful suggestions. **Q1**: In Algorithm 1, choosing the action $x_t$ and estimating $\hat f_t$ can be inefficient. **A1**: The double oracle is a standard approach for implementing optimism in online RL and bandits with general function approximation [1,2,3]. In linear function approximation, we can compute the bonus $b_t$ efficiently [4] and choose $x_t=argmax_{x\in\mathcal X_t} \hat f_t(x) + b_t(x)$. However, there is no counterpart for general function approximation even when the reward range has a small bound. How to calculate the bonus efficiently is an open question even with bounded rewards. **Q2**: There are no experiments. Applicability? **A2**: Our work primarily focuses on theoretical analysis, and serves as a first step in using the Catoni estimator for nonlinear function approximation. We provide rigorous proofs demonstrating that the weighted Catoni estimator is robust against heavy-tailed noise. Although the proposed theoretical algorithm is not computationally efficient, our analysis serves as a first step toward designing robust algorithms for general function classes. We believe the Catoni estimator and the variance-aware weighting technique offer valuable insights for practical use. Developing an efficient, practical version of the algorithm is a question for future work. **Q3**: In Table 1, it seems that DistUCB has a better regret. Does the footnote mean it has a larger $\tilde d_F$? If the function class is restricted to Gaussian or exponential, what is the difference between $\tilde d_F$ and $d_F$? **A3**: Note that DistUCB does not achieve a better regret bound due to its polynomial dependence on the reward range $R$. Moreover, the order of $\tilde d_F$ is generally incomparable to that of $d_F$. DistUCB assumes **realizability of the reward distribution**, i.e., the true reward distribution $r = R + \epsilon$ belongs to the considered class of distributions. In contrast, we only assume realizability of the reward mean $R$.
Therefore, the footnote indicates that their assumption is stronger than ours. According to Definition 5.2 in the DistUCB paper, $\tilde d_F$ measures the complexity of a class of distributions, while $d_F$ measures the complexity of a class of mean functions. These two notions are incomparable. Even in the Gaussian case, the total variation distance between $N(f, v_f)$ and $N(g, v_g)$ is incomparable to $(f - g)^2$. Further, the noise distributions may be complicated and may not even satisfy realizability. **Q4**: Assumption 4.1, is it common to have a small c in practice? **A4**: The fourth moment bound is required in most prior works (e.g., Li et al. and Huang et al.) that handle the unknown variance case. Our assumption $Var[\eta^2] \le c Var[\eta]$ is equivalent to theirs when $Var[\eta]$ is bounded. Additionally, bounded second and fourth moments are already a substantially weaker condition than a bounded range within $[0,1]$. **Q5**: In Theorem 3.1, the regret lower bound depends on the used policy. **A5**: The intuition is as follows. Consider two bandit problems, each with two arms. These two bandits differ only in the reward distribution of the second arm. In both bandits, the first arm has zero variance, and the second arm has variance $\sigma^2$. The maximum regret between these two instances depends on how often the second arm is chosen, which is given by $\mathbb{E}\sum_t \sigma_t^2 / \sigma^2$. Therefore, we obtain a lower bound directly related to the variances of the selected actions. The construction is not policy dependent, but a simple argument that there are two problem instances, and any algorithm incurs a large regret in one or the other. **Q6**: Why does Eq. (4) hold? Is a "[ ]" missing? **A6**: We are missing a "[ ]" and will correct it in the revision. **Q7**: Why the "indirect" estimator in (3)? What if we just let $\hat f_t$ be the one with minimum $\sum_iE[(f(x_i)-y_i)^2]$? **A7**: We have explained the intuition in Lines 203-219 (right column).
Specifically, the reason is that directly solving $$ \mathrm{argmin}_f Catoni(\{\frac{1}{\bar\sigma_i^2}(f(x_i)-y_i)^2\}) $$ does not work well. If we use the optimization above and apply Lemma 3.2 with $Z_i=\frac{1}{\bar\sigma_i^2}(f(x_i)-y_i)^2$, which is second order in the noise, then we need to deal with the fourth moment of the noise in the analysis even in the known variance case, since Lemma 3.2 needs bounded variance of $Z_i$. In our approach, the indirect estimator helps to cancel the higher order of $y_i$ and make the term $(f(x_i)-f'(x_i))(f(x_i)-y_i)$ easier to analyze. [1] Jin, C., et al. Bellman eluder dimension: New rich classes of RL problems, and sample-efficient algorithms. NeurIPS, 2021. [2] Liu, Q., et al. When is partially observable reinforcement learning not scary? COLT, 2022. [3] Agarwal, A., et al. VO$Q$L: Towards Optimal Regret in Model-free RL with Nonlinear Function Approximation. COLT, 2023. [4] Abbasi-Yadkori, Y., et al. Improved algorithms for linear stochastic bandits. NeurIPS, 2011.
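Since the discussion above leans heavily on the Catoni estimator, a minimal NumPy sketch of the classical (unweighted) Catoni mean estimator may help readers unfamiliar with it. This illustrates the building block only, not the paper's weighted, function-class version; the bisection solver and the `alpha` tuning shown here are standard illustrative choices, not taken from the paper.

```python
import numpy as np

def psi(x):
    # Catoni's influence function: psi(x) = sign(x) * log(1 + |x| + x^2 / 2).
    # It grows only logarithmically, which is what damps heavy-tailed outliers.
    return np.sign(x) * np.log1p(np.abs(x) + 0.5 * x**2)

def catoni_mean(samples, alpha, iters=100):
    # The Catoni estimate is the root theta of sum_i psi(alpha * (x_i - theta)) = 0.
    # The sum is nonincreasing in theta, so plain bisection finds the root.
    lo, hi = samples.min() - 1.0, samples.max() + 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.sum(psi(alpha * (samples - mid))) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Heavy-tailed demo: Student-t with 3 degrees of freedom (mean 0, variance 3,
# infinite fourth moment). alpha follows the usual sqrt(2 log(1/delta) / (n * var))
# style tuning for confidence level delta = 0.01.
rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=2000)
alpha = np.sqrt(2 * np.log(1 / 0.01) / (len(x) * 3.0))
est = catoni_mean(x, alpha)  # should land close to the true mean, 0
```

Unlike the empirical mean, whose deviation scales with the worst sample, the Catoni estimate enjoys sub-Gaussian-style deviation bounds under only a finite-variance assumption, which is the property the rebuttals invoke.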
Summary: This paper studied the setting of variance-aware contextual bandits (or second-order bandits). Specifically, this paper aims to develop algorithms whose regret is upper bounded by the variance of the noise. Suppose the noise lies in $[-R, R]$; all previous works that studied this question obtained regret bounds with polynomial dependence on $R$. This paper adopts the Catoni estimator from the robust statistics literature, and develops an algorithm whose regret upper bound depends only logarithmically on $R$. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, I checked the proofs for the theoretical claims. All arguments seem to be correct, except for typos. Experimental Designs Or Analyses: No experiments in this paper. Supplementary Material: Yes, I reviewed the supplementary materials, which contain the proofs. Relation To Broader Scientific Literature: Previous works that studied the same problem all have regret bounds depending polynomially on the noise scale; this paper develops an algorithm whose regret upper bound depends only logarithmically on the noise scale. Essential References Not Discussed: The following reference is also related to this paper. I suggest the authors provide a comparison to the results therein. [1] Z Jia, J Qian, A Rakhlin, CY Wei "How Does Variance Shape the Regret in Contextual Bandits?" Other Strengths And Weaknesses: Strengths: 1. The algorithm and analysis technique allow us to obtain a regret bound that has only logarithmic dependence on the noise scale. This provides a different perspective on variance-aware algorithms for contextual bandits. 2. This paper is well-written, except for a few typos. Weaknesses: 1. The techniques used in this paper are not new. This paper is not the first paper that adopts the Catoni estimator in designing bandit algorithms. Other Comments Or Suggestions: Typos: 1. Line 272, in the left column: $\mathbb{E}[Z_i|x_i]$ to $\mathbb{E}[Z_i(f, f')|x_i]$ 2.
Line 754, right hand side: $E[\frac{1}{\bar{\sigma}_i^2}...]$ to $E[\frac{1}{\bar{\sigma}_i^4}...]$ 3. In the description of Lemma 3.6 (also Lemma B.5 and the proof therein): the minimizer $f'$ to the maximizer $f'$ Suggestions: 1. I suggest the authors use a different notation for $L_t(f, f')$, since it is easily confused with $L_f$ Questions For Authors: I have the following questions: 1. In the case where the variances are all known, the algorithm in the literature I referred to above can achieve regret $\sqrt{A * \sum_{t=1}^T \sigma_t^2 * \log|F|} + d_{elu}$. However, the setting therein requires the noise scale to be bounded by 1. Is it possible to improve your results so that the dominating term also scales with $\sqrt{A * \sum_{t=1}^T \sigma_t^2 * \log|F|}$, while keeping the dependence on the noise scale logarithmic? 2. In the current setup, the variance cannot depend on the action chosen. If the variance depends on the action chosen at each time step, will the results still work? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your constructive advice! **Q1**: Jia et al. is also related to this paper. I suggest the authors provide a comparison to the results therein. **A1**: We will cite their work and provide a comparison in the revision, and want to mention that Jia et al. is contemporaneous with our work. The main difference is that they assume that the noise is bounded such that $r_t\in[0,1]$, and we consider heavy-tailed rewards. Besides, although we both obtain variance-dependent bounds, the focuses are distinct: they aim to obtain better regret bounds when the eluder dimension is larger than the number of actions, while we aim to obtain logarithmic dependence on the reward range. Regardless of the dependence on the reward range, for the weak adversary with revealed variance, our upper bound $O(\sqrt{\Lambda\cdot d_{elu} \log N})$ is incomparable to theirs, $O(\sqrt{A\Lambda \log N} + d_{elu}\log N)$. For the strong adversary, we both use the peeling technique and thus our bound is superior in its dependence on the reward range. **Q2**: The techniques used in this paper are not new. This paper is not the first paper that adopts the Catoni estimator in designing bandit algorithms. **A2**: Although the Catoni estimator has previously been applied in bandit problems, it has not yet been used with nonlinear function approximation, so we provide new methods and analysis. To the best of our knowledge, our newly designed estimator in (3), which is based on excess risk, is the first one capable of handling heavy-tailed noise in a nonlinear function class. Other robust estimators, such as Huber regression and the median-of-means estimator (Lugosi and Mendelson, 2019), are limited to linear vector space structures. Additionally, we introduce a novel analysis method to establish concentration results. Existing analyses for linear heavy-tailed bandits rely explicitly on linear structures and do not generalize to the nonlinear case considered here.
Furthermore, for the unknown variance scenario, we employ a peeling technique, which allows our approach to avoid the need for approximating the noise variance using a function class. **Q3**: In the case where the variances are all known, the algorithm in the literature referred to above can achieve regret $\sqrt{A*\sum_{t=1}^T\sigma_t^2*\log|F|}+d_{elu}$. However, the setting therein requires the noise scale to be bounded by 1. Is it possible to improve your results so that the dominating term also scales with $\sqrt{A*\sum_{t=1}^T\sigma_t^2*\log|F|}$, while keeping the dependence on the noise scale logarithmic? **A3**: That is an interesting question for future work. Currently, we develop algorithms based on the OFUL structure, while Jia et al. build on the SquareCB approach. It might be feasible to connect the concentration arguments in future work. We will add this discussion to the final version. **Q4**: In the current setup, the variance cannot depend on the action chosen. If the variance depends on the action chosen at each time step, will the results still work? **A4**: We would like to claim that our regret bounds in both Theorem 3.4 and 4.2 depend on the variance $\sigma_t$ of the chosen actions at each time step. $\sigma_t$ is defined in Line 128 (left column) and is the variance of the noise of the chosen actions. **Typos**: We will correct them in the revision. **Suggestions**: Thanks for the suggestions. We will use a different notation to avoid confusion. [1] Jia, Zeyu, et al. How Does Variance Shape the Regret in Contextual Bandits? NeurIPS, 2024.
Summary: In this work, the authors propose a novel algorithm to tackle the contextual bandit problem in the presence of heavy-tailed noise assumptions. In particular, they assume that the variance of the noise is finite and deal with the scenarios in which (i) the noise variance is known to the learner, improving existing literature bounds, and (ii) the noise variance is unknown to the learner, obtaining nearly matching regret bounds. Claims And Evidence: Yes, every claim is supported by a proof. Methods And Evaluation Criteria: There is no experimental evaluation in the paper. Theoretical Claims: I went through some of the proofs. Each of them seems correct to me. Experimental Designs Or Analyses: There is no experimental evaluation in the paper. Supplementary Material: I quickly went through some of the proofs. Relation To Broader Scientific Literature: As the authors discuss, their results strictly improve upon those from previous literature, especially in the known variance scenario. Regarding the adaptation to variance, it is less clear what different assumptions were made by previous works. In this work, the authors make a reasonable fourth-moment assumption, which already exists in the bandit literature [1]. Moreover, in heavy-tailed bandits, a dedicated sub-literature on adaptation to the unknown noise variance/1+epsilon moment exists [2,3,4]. I would be interested in knowing how this work relates to them: here, the focus is on the contextual scenario, but what happens if I cast these results to the unstructured MAB setting? Will they improve over the existing works? References [1] LATTIMORE, Tor. A scale free algorithm for stochastic bandits with bounded kurtosis. Advances in Neural Information Processing Systems, 2017, 30. [2] HUANG, Jiatai; DAI, Yan; HUANG, Longbo. Adaptive best-of-both-worlds algorithm for heavy-tailed multi-armed bandits. In: International Conference on Machine Learning. PMLR, 2022. p. 9173-9200. [3] GENALTI, Gianmarco, et al.
$(\epsilon, u)$-Adaptive Regret Minimization in Heavy-Tailed Bandits. In: The Thirty-Seventh Annual Conference on Learning Theory. PMLR, 2024. p. 1882-1915. [4] Chen, Yu, et al. uniINF: Best-of-Both-Worlds Algorithm for Parameter-Free Heavy-Tailed MABs. arXiv preprint arXiv:2410.03284, 2024. Essential References Not Discussed: All the essential and related literature on contextual MABs has been discussed. For some additional literature on adaptation to the noise variance in HT MABs, see the previous box. Other Strengths And Weaknesses: The paper is well written, and the approach's strength is clear and provable. The authors make sure to compare with the existing literature and highlight the extent of their improvement. Moreover, the proofs seem correct. Other Comments Or Suggestions: There's a typo in Table 3, parameter $\lambda^l$: prameter -> parameter. Questions For Authors: I don't have any relevant questions, except for the one already stated in the related-works box. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your helpful advice! **Q1**: In heavy-tailed bandits, a dedicated sub-literature on adaptation to the unknown noise variance/1+epsilon moment exists [2,3,4]. I would be interested in knowing how this work relates to them: here, the focus is on the contextual scenario, but what happens if I cast these results to the unstructured MAB setting? Will they improve over the existing works? **A1**: We will cite [2,3,4] and include a discussion in the revised version. However, we want to highlight that our formulations and techniques differ significantly. Our work focuses on the contextual setting with general function approximation, whereas [2,3,4] consider the standard multi-armed bandit (MAB) setting. It is definitely possible to obtain sharper gap-dependent bounds in the MAB setting, but in the most general case, we can get $\sigma\sqrt{KT}$ bounds in the MAB literature, while one has $\Omega(d\sqrt{T})$ lower bounds even in the linear setting with large action spaces [1]. Consequently, the results are not directly comparable. We will add this discussion to the final version. Regarding algorithms, we adopt the OFUL framework combined with weighted Catoni estimators and peeling techniques. In contrast, [2] uses a skipping method based on Follow-the-Regularized-Leader, and [3,4] design adaptive algorithms capable of handling unknown $\alpha$ and unknown moment bounds. **Q2**: there's a typo in Table 3, parameter $\lambda^l$: prameter -> parameter. **A2**: Thanks for the correction. We will correct it in the revision.
Summary: This paper introduces contextual bandit algorithms that are robust to heavy-tailed rewards by leveraging Catoni’s mean estimator from robust statistics. The authors propose two algorithms: Catoni-OFUL for the known-variance setting, and VACB, for the unknown-variance setting. Both algorithms achieve regret bounds that depend on the cumulative reward variance and scale only logarithmically with the reward range $R$, improving upon prior work that exhibits polynomial dependence on $R$. In the unknown-variance case, the authors avoid direct variance estimation by employing a peeling-based approach combined with a plug-in estimator derived from Catoni’s method. A matching lower bound is established in the known-variance setting, demonstrating the optimality of the leading-order term in the regret bound. Claims And Evidence: Most of the claims are well-supported. 1) Extensive comparisons (see Table 1) with existing algorithms highlight the advantages of the proposed methods in heavy-tailed reward settings. 2) The concentration results for Catoni’s estimator are carefully stated and integrated into the analysis with appropriate rigor. 3) The proof sketches and supporting lemmas provide a clear logical path from algorithm design to the stated regret bounds. One minor concern: the challenges of solving the min-max optimization in Eq. (3) are acknowledged but not explored in depth. While Algorithm 3 is proposed as a more efficient alternative, the paper does not discuss its trade-offs in detail. For example, what are the theoretical or empirical pros and cons of this variant? Could a similar approach be extended to the unknown-variance setting? Methods And Evaluation Criteria: N/A Theoretical Claims: Overall, the proofs are technically sound based on the provided sketches and lemmas. For the known-variance case: The authors leverage a uniform concentration inequality (Lemma 3.2) for Catoni’s estimator. 
This result is a non-trivial extension of prior Catoni bounds and is central to constructing valid confidence sets. For the unknown-variance case: The authors show how to control variance-normalized excess loss without exact variances. Lemma 4.4 demonstrates that the plug-in estimator for the cumulative variance (based on Catoni's method) remains accurate up to logarithmic factors. The authors then carefully control the contribution to regret from different uncertainty levels using a peeling argument. Experimental Designs Or Analyses: There is no empirical evaluation in this paper. While acceptable for a theory-focused paper, a small experiment could have illustrated the practical benefits of robustness. Nevertheless, the theoretical evaluation is comprehensive and grounded. Supplementary Material: I briefly went through Appendix B (Proofs for the Known Variance Setting). Relation To Broader Scientific Literature: N/A Essential References Not Discussed: No major omissions. Other Strengths And Weaknesses: Novel and principled use of the Catoni estimator in a contextual bandit setting. Other Comments Or Suggestions: See above. Questions For Authors: 1) In the linear reward setting with known variances, does Catoni-OFUL incur slightly worse regret compared to existing algorithms such as AdaOFUL (as shown in Row 2 of Table 1)? Could the authors clarify whether this is due to the generality of the function class or an artifact of the analysis? 2) Theorem 4.2 achieves a variance-dependent regret bound that matches the known-variance case (Theorem 3.4) up to a slightly worse dependence on the eluder dimension. Could the authors provide more insight into whether this additional dependence is intrinsic to the peeling-based approach, or whether it might be improved with a different algorithmic or analytical technique? Code Of Conduct: Affirmed. Overall Recommendation: 4
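The reviews above repeatedly refer to Catoni's mean estimator. For readers unfamiliar with it, here is a minimal, self-contained sketch of the plain (unweighted) estimator, with an illustrative tuning parameter `alpha`; the paper's confidence sets build on a weighted variant, which this toy code does not implement.

```python
import math

def psi(x):
    """Catoni's (narrowest) influence function: it grows only
    logarithmically, which is what tames heavy-tailed observations."""
    s = 1.0 if x >= 0 else -1.0
    return s * math.log(1.0 + abs(x) + x * x / 2.0)

def catoni_mean(xs, alpha, iters=200):
    """Return the root theta of sum_i psi(alpha * (x_i - theta)) = 0.
    The left-hand side is strictly decreasing in theta, so bisection
    over [min(xs), max(xs)] converges."""
    g = lambda theta: sum(psi(alpha * (x - theta)) for x in xs)
    lo, hi = min(xs), max(xs)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

On symmetric data the estimate coincides with the sample mean, while a single huge outlier moves it far less than the empirical average would: `catoni_mean([0, 1, 2, 3, 1000], 1.0)` stays below 10, whereas the sample mean of that list is about 201.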
Rebuttal 1: Rebuttal: Thanks for your constructive suggestions!

**Q1**: The challenges of solving the min-max optimization in Eq. (3) are acknowledged but not explored in depth. While Algorithm 3 is proposed as a more efficient alternative, the paper does not discuss its trade-offs in detail. For example, what are the theoretical or empirical pros and cons of this variant? Could a similar approach be extended to the unknown-variance setting?

**A1**: Theoretically, the two algorithms have the same regret bound. Computationally, Algorithm 3 is more efficient since it picks one function from the constructed candidate set instead of solving a min-max optimization. Despite this advantage, we chose to present Algorithm 1 in the main text because it is simpler, clearer, and easier to explain, and its design is better aligned with OFUL-style algorithms, which readers might find familiar. Regarding the extension, Algorithm 3 can be adapted to the unknown-variance case with similar techniques and obtains almost the same result. We will add the extension in the revision.

**Q2**: There is no empirical evaluation in this paper. While acceptable for a theory-focused paper, a small experiment could have illustrated practical benefits of robustness. Nevertheless, the theoretical evaluation is comprehensive and grounded.

**A2**: Our work primarily focuses on theoretical analysis and serves as a first step toward using the Catoni estimator for nonlinear function approximation. We provide rigorous proofs demonstrating that the weighted Catoni estimator is robust against heavy-tailed noise. Although the proposed theoretical algorithm is not computationally efficient, our analysis serves as a first step toward designing robust algorithms for general function classes. We believe the Catoni estimator and the variance-aware weighting technique offer valuable insights for practical use.
Developing an efficient, practical version of the algorithm is an important question for future work.

**Q3**: In the linear reward setting with known variances, does Catoni-OFUL incur slightly worse regret compared to existing algorithms such as AdaOFUL (as shown in Row 2 of Table 1)? Could the authors clarify whether this is due to the generality of the function class or an artifact of the analysis?

**A3**: The regret bounds match in the dominant term and are worse in $d$ for the non-dominant term. The reason for this difference is that the linear vector space allows a smaller variance-based weight for regression, with $d^{0.25}$ on the denominator according to Lemmas B.1 and B.2 of Li et al. Their arguments do not generalize to the nonlinear structure, however, and we can only use $\sqrt{\log N\cdot D_{F_{t-1}}}$ ($\iota(\delta)=\Theta(\sqrt{\log N})$) in the weights $\bar \sigma_t$ to balance the orders in Lemma B.2. The cost of using a larger weight is additional dependence in the non-dominating term (Lines 1013-1026). How to match this non-dominating term is an interesting direction for future work.

**Q4**: Theorem 4.2 achieves a variance-dependent regret bound that matches the known-variance case (Theorem 3.4) up to a slightly worse dependence on the eluder dimension. Could the authors provide more insight into whether this additional dependence is intrinsic to the peeling-based approach, or whether it might be improved with a different algorithmic or analytical technique?

**A4**: This additional dependence comes from the peeling approach in general function approximation and also appears in [1,2]. The linear analysis for peeling in Zhao et al. is restricted to the linear vector space structure and a special form of uncertainty, and thus cannot be extended to the nonlinear space.
Hence, in our analysis (Lines 332-337), we can only bound $\frac{(f(x_i)-f'(x_i))^2}{w_i^2}$ by $2^{-2l}\cdot\beta_{t-1}^2$, thus leading to the worse order, while $\|\hat\theta-\theta_*\|_{\Sigma_t}$ is upper bounded by $2^{-2l}\sum_i\sigma_i^2/w_i^2$ without the additional order on $d$.

[1] Pacchiano, A. Second order bounds for contextual bandits with function approximation, 2024.

[2] Jia, Zeyu, et al. How Does Variance Shape the Regret in Contextual Bandits? NeurIPS, 2024.
NTPP: Generative Speech Language Modeling for Dual-Channel Spoken Dialogue via Next-Token-Pair Prediction
Accept (poster)
Summary: This paper introduces PARROT, a system designed to handle dual-channel spoken dialogue using large language models (LLMs). The authors highlight the importance of capturing conversational features such as overlaps, pauses, and interruptions to provide more realistic spoken interactions. Building upon previous work like dGSLM and Moshi, the authors explore how to utilize dual-channel speech data within modern LLMs. The core of their approach is a Next-Token-Pair Prediction (NTPP) paradigm, where a decoder-only transformer is trained to predict the next pair of speech tokens based on the past dialogue (at time $t$ it predicts the next token pair $(a, b)$ across both channels: user channel and bot channel). Their dual-channel transformer architecture can also benefit from recent KV-cache optimizations for lower inference latency. They compare PARROT with existing methods (dGSLM & Moshi) along metrics/tasks such as conversation event simulation, interruption response success rate, and inference latency. Claims And Evidence: Yes, they mostly are, but there is a problematic claim I describe below: lines 68 & 134 => the paper's comparison to previous work (e.g., Moshi) may need revision for accuracy. I think there is a potential issue in the paper's positioning of prior work. Specifically, Moshi is inaccurately classified as following an encoder-decoder architecture when it actually employs a decoder-only model (Moshi is powered by Helium, a 7B-parameter transformer-based language model designed for spoken dialogue. Helium follows a decoder-only structure, similar to GPT models, meaning it generates responses autoregressively without a separate encoder).
=> this is, to me, an important misunderstanding of a related work, which weakens the paper's positioning; Methods And Evaluation Criteria: Yes, they are (although I have a few concerns expressed in the 'experiments & design' section). Theoretical Claims: The paper is mostly experimental; formalizations mostly recap previous work. Experimental Designs Or Analyses: - The results in Table 1 need more detail: it's unclear what 0.1 or 0.9 represent in NTPP_0.1 or NTPP_0.9. Additionally, it's difficult to assess whether the differences shown in Table 1 (compared to dGSLM) and Figure 7 (compared to Moshi) are significant or meaningful. A clearer explanation would help in understanding the impact. - Inference latency is better => OK, but where does it come from? From the authors' architecture or from leveraging more recent KV-cache optimization techniques? - The most convincing experiment to me is the human evaluation, which measures turn-taking naturalness and content meaningfulness. However, more details on the evaluation process would be helpful, such as how listeners were recruited, their backgrounds, etc. Supplementary Material: Yes, I listened to the examples (code was also provided, but I did not look into the details). The provided audio example compares PARROT with a baseline, but it doesn't truly represent a dialogue; it's more of a response to a question. As a result, its relevance to conversational turn-taking events is unclear. A more representative example would better illustrate the model's strengths in handling dialogue dynamics. Relation To Broader Scientific Literature: It's not clear how original this work is compared to Moshi (Defossez, 2024). Their Next-Token-Pair Prediction (NTPP) method is new, but the idea of handling dual-channel speech in a single sequence using subsequences is not (done in the Moshi technical report, for instance). The authors should more clearly explain what sets their approach apart.
Essential References Not Discussed: OK. Other Strengths And Weaknesses: I covered them already in the previous sections. Other Comments Or Suggestions: No. Questions For Authors: Section 5.8 ablation: can you elaborate on what exactly the two-stage training approach is (it seems to perform better)? This should be detailed more thoroughly. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
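To make the Next-Token-Pair Prediction idea from the summary above concrete, here is a hypothetical toy sketch of greedy pair decoding; `ntpp_generate` and `toy_model` are illustrative stand-ins (not the authors' implementation), with the model abstracted as a callable returning a joint distribution over channel-pair tokens.

```python
def ntpp_generate(model, history, steps):
    """Greedy next-token-pair decoding: at every step the model returns a
    joint distribution over (channel-A, channel-B) token pairs, conditioned
    on the whole pair history, and we append the most likely pair."""
    for _ in range(steps):
        probs = model(history)            # dict mapping (a, b) -> probability
        pair = max(probs, key=probs.get)  # greedy choice, for illustration
        history = history + [pair]
    return history

def toy_model(history):
    # Hypothetical stand-in for a trained decoder-only transformer:
    # it always prefers "channel A silent (0), channel B speaking (1)".
    return {(0, 1): 0.9, (1, 0): 0.1}
```

With this stand-in, `ntpp_generate(toy_model, [], 3)` returns `[(0, 1), (0, 1), (0, 1)]`; a real model would condition the pair distribution on the history, which is how overlaps and interruptions can be produced on either channel.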
Rebuttal 1: Rebuttal:

**Q1: Why we classify Moshi as an encoder-decoder model**

Thanks for raising this important and insightful question. While Moshi utilizes the Helium LLM as its temporal transformer (a decoder-only architecture), the inclusion of the RQ-Transformer introduces a spatial transformer component, which deviates from a standard decoder-only structure for several reasons (see Figure 3 on page 13 and Figure 1 on page 7 of Moshi's paper). First, the RQ-Transformer employs a spatial transformer that encodes input structure information before the autoregressive decoding process. This encoding step explicitly transforms the input data, making it functionally similar to an encoder stage. Second, instead of directly processing tokens in a purely decoder-based manner, the RQ-Transformer constructs a context vector through the spatial transformer. This context vector resembles the latent representations typically computed by encoders in encoder-decoder architectures. Overall, Moshi's architecture can be viewed as "two decoder-only architectures connected via an encoder-decoder module." Consequently, unlike a purely decoder-only approach like our NTPP, Moshi requires maintaining two separate KV caches for its two decoder-only transformers. Therefore, Moshi is not a standard decoder-only architecture; instead, we classify it as an encoder-decoder model from a dual-channel modeling perspective. We refer the reviewer to our response to Reviewer muk3 for further details on this matter.

**Q2: The major weaknesses of Moshi's architecture compared to our NTPP and previous methods**

We refer the reviewer to the Experimental Designs or Analyses Q1 response for Reviewer muk3 for further details on this question.

**Q3: Further Clarifications on Table 1**

We followed the evaluation settings for quality and statistical analysis of generated dialogues used in Moshi[^1].
Specifically, we selected 1,000 random 10-second prompts from the Fisher dataset and utilized NTPP and other baseline models to generate dual-channel speech continuations. For each prompt of NTPP, we generated 32 continuations across three different temperature settings [0.1, 0.5, 0.9], as temperature significantly impacts the results.

**Q4: Inference Latency Explanations**

We employ Voice Activity Detection (VAD) as the latency measurement benchmark, calculating system response delays exclusively during effective speech token segments. This approach more accurately simulates real-world conversational user experience. Our experimental results demonstrate that the latency improvement in multi-round dialogue primarily stems from our novel Next-Token-Pair Prediction mechanism: by independently calculating the attention values of dual-channel tokens, our method achieves fine-grained time alignment. In contrast, Moshi's strategy of directly summing dual-channel token embeddings risks introducing inter-channel information interference.

**Q5: Further Clarifications on Human Evaluation Experiments.**

We followed the evaluation settings of Moshi and conducted the evaluation study with 25 annotators who possess native-level English proficiency. These annotators were recruited through a combination of academic networks and online platforms, ensuring a diverse range of backgrounds and experiences. Each annotator was thoroughly briefed on the evaluation criteria and the objectives of the study. We adapted the Mean Opinion Score (MOS) protocol, utilizing a 5-point Likert scale, to assess two key aspects: the Naturalness (N-MOS) of turn-taking and the Meaningfulness (M-MOS) of dialogue content. This approach allowed us to gather quantitative data on the perceived quality of the generated dialogues.
**Q6: More Representative Demos.**

We provide more dialogue examples on the [Demo Page](audio-3059.pages.dev), which illustrates the performance of our NTPP model across several key scenarios, including multi-turn dialogues, "Interruptions & Reflective Pause" evaluation, and speech continuation. The audio samples demonstrate how NTPP handles these different situations.

**Q7: More Explanations on Two-Stage Training.**

We describe our two-stage training process in Section 5.1 (Dataset). The first stage involves training a text-based LLM on single-channel speech token sequences. In the second stage, we introduce our newly proposed NTPP paradigm for dual-channel speech learning, building upon the SpeechLM obtained in the first stage. The ablation results in Figure 9 aim to support a simple claim: the first-stage training is essential before the second-stage NTPP training. We have further clarified this ablation study with a slightly modified figure: [Figure 9](audio-3059.pages.dev/figure9). This updated figure includes three curves: Full Two-Stage NTPP (NTPP), NTPP without the first stage (NTPP w/o-1), and NTPP without the second stage (NTPP w/o-2). Please feel free to ask if you have any further questions.
Summary: This paper proposes a next-token-pair prediction approach for modelling a dual-channel streamable Speech LM. The authors propose to use an autoregressive LM to model both speakers in a conversation, predicting token pairs from both channels at each timestep. The model is trained using a two-stage pipeline and compared against seminal works like dGSLM and Moshi. Claims And Evidence: The paper makes several claims regarding the advantages of its proposed architecture. However, there are some issues where the claims are not fully supported by evidence: 1. Encoder-decoder inefficiency (Lines 70-72): The authors claim that encoder-decoder models are inefficient and not scalable but provide no justification or supporting references. Given that Moshi, which (according to the authors) also follows an encoder-decoder structure, achieves the same goals as this paper, a more detailed explanation and citations are needed to support their claims. 2. (Lines 90-105): The authors list four advantages of their approach, but Moshi already exhibits three of these (points 2, 3, and 4) during pretraining. The differences between NTPP and Moshi should be more clearly articulated. Also, Moshi, too, uses a decoder-only architecture, as inference is done by the Helium LLM. 3. The performance of the RVQ-based tokenizer is not compared against Mimi or other streaming-compatible tokenizers, making it difficult to assess its effectiveness. 4. The performance improvements claimed in Tables 1 and 2 cannot be properly verified due to insufficient details about baseline models (including the "cascaded" model) and evaluation methodology. 5. The paper highlights that its approach outperforms Moshi in inference latency as the number of turn-takings increases, but since Moshi uses a Helium LLM with a smaller context window than Llama 3.1 8B, this might be an unfair comparison. Methods And Evaluation Criteria: The authors use relevant benchmarks like IPU, but some important details are missing: 1.
The 14k-hour dataset composition is not clearly specified: Which three datasets were used? What is their language composition? Are they purely single-channel or multi-channel datasets? 2. The paper compares against Mistral-7B and Gemma-7B, but Qwen 2.5 would have been a more appropriate choice due to its stronger audio capabilities. 3. The paper does not use StoryCloze or ZeroSpeech Challenge. Other works like dGSLM and Moshi use these, so their absence weakens the evaluation. 4. The distinction between different NTPP variants (0.1, 0.5, and 0.9) in Table 1 is never explained, making it impossible to interpret these results meaningfully. There's also no mention of cascaded model anywhere in the main text. Many more crucial details are missing from the paper which make it difficult to assess the validity of claims. Theoretical Claims: n/a Experimental Designs Or Analyses: 1. The comparison against Moshi is unclear, especially given that the open-source Moshi checkpoint is fine-tuned on one speaker and always begins with a welcome message. The paper doesn't specify how this was handled while conducting evals. 2. The "one-stage" vs. "two-stage" ablation study in section 5.8 lacks clear definition. It's not clear to me what the one-stage approach is - details are missing from the paper. 3. Regarding Figure 7 - it's not clear who the "judge" is or the methodology used, making it difficult to assess the objectivity of these results. 4. Vocoder streaming compatibility is not addressed. How did the authors make the vocoder compatible for streaming? 5. Figure 6 is included but never discussed in the text and lacks a legend, making its purpose and meaning unclear. 6. Table 2 has limited comparisons. I would have also wanted to see comparison against Llama-Omni or SpeechGPT. Supplementary Material: Yes - both: code and Appendix The code seems incomplete, as only pretraining scripts are provided, while finetuning and inference scripts are missing (as per README). 
Could the authors clarify this or provide the missing components? Relation To Broader Scientific Literature: There has been a huge interest in speech LMs, especially ones that can process dual-channels like Moshi, GPT 4o, etc. These models have application in Voice Agents and hence, this paper too tries to make a contribution towards such models. I appreciate the authors' intention towards open-sourcing the codebase and checkpoints. Essential References Not Discussed: - Other Strengths And Weaknesses: ## Strengths: - Proposed architecture looks simple to implement and can also work with new LLMs out-of-the-box. ## Weaknesses: - Key implementation details are missing, making it difficult verify the evals and draw comparisons against Moshi. - There are several references and clarity issues. Moreover, figure 6 is added but never discussed. [See "Questions for Authors" section below] - I also have concerns around RVQ and Vocoder [See "Questions for Authors" section below] - No standard benchmarks like StoryCloze or ZeroSpeech challenges were used for linguistic quality evaluation Other Comments Or Suggestions: 1. Lines 352-353: Text says Appendix 1, but this seems to be missing (likely Appendix C.2). The reference should be fixed, and additional missing details should be included. 2. In Table 1, cascaded model has the lowest delta IPU of 1.3s yet the NTPP's values are highlighted. 3. Line 186, column 2. Typo: "modelling Figure 1" Wrong reference. Should be Figure 3.5. Questions For Authors: 1. The RVQ implementation has ambiguity: it's unclear how Z_a and Z_b are handled in the RVQ case, specifically whether W_q is multiplied by d (calculated in equation 12) and how the model learns different embedding values for different speech tokens. 2. What are the three speech datasets that make up the 14,000 hours of training data? Please provide details about their language compositions and other characteristics. 3. 
What is the "cascaded model" referenced in Table 1, and how was it implemented? Also, please clarify what NTPP 0.1, 0.5, and 0.9 represent in this table. 4. How did you evaluate Moshi given that the open-source checkpoint is fine-tuned on one speaker (moshiko / moshika) and always begins with a welcome message? 5. How does your RVQ tokenizer compare to other streaming-compatible tokenizers like Mimi in terms of performance? What data was used to train it? 6. Is your vocoder streaming-compatible? What are its encoding/decoding rates, and how many frames of audio are synthesized at each time step? 7. Could the latency differences with Moshi be attributed to differences in context window size rather than architectural advantages of your approach? 8. What start tokens are used, and how exactly did you prepare the pretraining and finetuning data? 9. What does Figure 6 represent, and why is it not discussed in the main text? 10. Lines 293-294 say "To encode the relative positional information of tokens, all three leverage rotary positional encoding (Su et al., 2024a)." Can the authors elaborate on this? Aren't equations 8, 9 and 12 being used for embeddings? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

# Claims and Evidence

**Q1: The inefficiency of encoder-decoder architectures**

The efficiency of decoder-only models is supported by various works in the literature, for example FlashAttention (Dao et al., 2022) and parallelized decoding (Kumar et al., 2020), which show how decoder-only models optimize memory and speed compared to encoder-decoder alternatives.

**Q3: StoryCloze linguistic quality evaluation**

| Model | sStoryCloze$\uparrow$ |
|---|---|
| Spirit-LM | 61.0 |
| Moshi | 60.9 |
| NTPP | 61.4 |

# Experiments Designs or Analyses

**Q1: The comparison between Moshi, NTPP and other approaches.**

We illustrate the comparison across the different approaches in the following table:

| Different Models | Speaker-Independent | Encoder-free | VAD-free | Single KVCache | End-to-End |
|---|---|---|---|---|---|
| dGSLM | $\checkmark$ | | $\checkmark$ | | $\checkmark$ |
| LSLM | | | | $\checkmark$ | $\checkmark$ |
| Moshi | | | $\checkmark$ | | $\checkmark$ |
| **NTPP (ours)** | $\checkmark$ | $\checkmark$ | $\checkmark$ | $\checkmark$ | $\checkmark$ |

In general, only our NTPP simultaneously possesses all five important properties, compared to the other approaches. Moshi is not speaker-independent and not encoder-free. Additionally, it requires two KV caches. (1) The Moshi architecture is not "speaker-independent". dGSLM indicates that the model should learn the joint distribution $p(s^{b},s^{a})$ instead of any conditional distribution $p(s^{b}|s^{a})$ or $p(s^{a}|s^{b})$. To satisfy this property, dGSLM adopts a two-tower transformer that follows the Siamese encoder-decoder architecture to estimate $p(s^{b},s^{a})$. Moshi and LSLM learn only the conditional distribution $p(s^{b}|s^{a})$, which inherently has a generalization issue when we simply switch the speakers' roles.
Also, this problem cannot be easily solved by training on the paired data alternately, since learning $p(s^{b}|s^{a})$ and $p(s^{a}|s^{b})$ sequentially is not equivalent to learning $p(s^{b},s^{a})$ directly. (2) Moshi is not an encoder-free model since it adopts the RQ-Transformer architecture. Further clarifications are in the Reviewer Ubh3 Q1 response. (3) Moshi requires storing two separate KV caches for its two decoder-only transformers, reducing memory efficiency. In contrast, our NTPP, as a typical decoder-only architecture, requires only a single KV cache, making it more memory-efficient.

**Q2: Two-Stage Ablation Studies**

We refer the reviewer to the **Reviewer Ubh3** Q7 response.

**Q3 & 4: Figure 7 judge and more clarifications on baselines and evaluation metrics in Tables 1 & 2.**

To save space, we refer the reviewer to the **Reviewer Ubh3** Q3 response.

**Q6: Llama-Omni and SpeechGPT Evaluations.**

Llama-Omni and SpeechGPT, as single-turn QA audio models, are incompatible with these real-time streaming multi-turn conversational benchmarks. Fisher and CANDOR require real-time handling of dynamic context shifts, interruptions (barge-ins), and mid-conversation pauses, capabilities inherently absent in VAD-dependent architectures.

**Supplementary Material & Suggestions**

We have updated the demo and code [Page](audio-3059.pages.dev). Upon acceptance, we will release the full code and model weights publicly and create a well-organized project page. The reported figure for the cascaded model should be 4.3 instead of 1.3 (a typo). We will promptly correct this in the revised version.

# Questions

**Q1-Q2: RVQ and Vocoder Implementations.**

For the RVQ tokenizer and vocoder, we followed the training settings specified in SoundStream (Neil et al., 2021) and HiFi-GAN (Jungil et al., 2020), respectively. Additionally, we have updated the training code with detailed training procedures.
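As a toy illustration of the single-KV-cache point in (3) above (hypothetical code, not from the NTPP release): interleaving the two channels into one sequence of token pairs means a single causal decoder runs over the dialogue, and hence a single cache grows by one entry per step, instead of two caches for two decoders.

```python
def interleave(channel_a, channel_b):
    """Merge two equally long token streams into one sequence of (a, b)
    pairs, so a single causal decoder can attend over both channels."""
    assert len(channel_a) == len(channel_b)
    return list(zip(channel_a, channel_b))

class SingleKVCache:
    """Toy stand-in for a transformer KV cache: one entry per decoding
    step, jointly covering both channels."""
    def __init__(self):
        self.entries = []

    def step(self, pair):
        # a real cache would store per-layer key/value tensors here
        self.entries.append(pair)

cache = SingleKVCache()
for pair in interleave([1, 2, 3], [4, 5, 6]):
    cache.step(pair)
```

After the loop the cache holds one joint entry per timestep, `[(1, 4), (2, 5), (3, 6)]`, rather than two separate three-entry caches.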
**Q3: Cascaded models and Table 1 Explanation**

Since the performance of the cascaded model relies on VAD, we use "cascaded model" as a general term for all non-interactive models, including multi-modal approaches and LLM-based cascading models. This class of models does not directly learn the generation of dual-channel speech, so we do not introduce them in detail. We will add short descriptions in the revised version. For the interpretation of the NTPP subscripts in Table 1, please refer to the Q3 response for Reviewer Ubh3.

**Q4-Q6: Dataset details, Moshi Evaluation Details and Tokenizer comparison**

We put the corresponding content on the [Page](audio-3059.pages.dev).

**Q7: Inference Latency Analysis**

To save space, we refer the reviewer to the Q4 response for Reviewer Ubh3.

**Q8: Start tokens.**

Instruction format:

```
<bos><Model_0_0>...<Model_0_k><Human_0_0>...<Human_0_k><eos>
```

**Q9: Figure 6 Discussions & Q10: Further explanations on positional encoding**

We put the Figure 6 discussion and further explanations on positional encoding on the [Page](audio-3059.pages.dev). We hope the above responses address your concerns. If you have further questions, please feel free to ask.

---

Rebuttal Comment 1.1: Comment: Thank you for the responses to my concerns. While the authors have addressed several issues on their demo page (note that the demo page link provided on this forum is incorrect; I had to use the link mentioned in the paper), there are still important points that need clarification:

1. **sStoryCloze Evaluation**: The authors' comparison appears to use Moshi's numbers after multi-stream instruct with the synthetic voice variant from Table 7 of the Moshi paper. This is inappropriate for a fair comparison, as NTPP isn't fine-tuned on one synthetic voice. Comparing with Moshi's multi-stream variant (which achieves 62.7) would be more appropriate. This higher baseline score also casts doubt on NTPP's claimed conversational capabilities.
Please address this discrepancy.

2. **Training Data Inconsistency:** The paper mentions using 14,000 hours of training data in stage 1, but the demo page states 140,000 hours. This 10x discrepancy is extremely misleading and hasn't yet been updated in the paper.

3. **Figure 6 Presentation:** While I appreciate the addition of the Figure 6 discussion on the demo page, this information should be included in the paper itself (or at minimum, in an appendix). The current state of the paper's presentation needs significant improvement to meet the conference's standards.

4. **Latency Analysis:** Thank you for providing latency comparisons with similar context window sizes for Moshi and NTPP-Llama2. However, the numbers appear quite close. Please include standard deviations alongside the mean values for the 5 turn-taking conversations to provide a complete picture of the performance differences.

5. **Tokenizer Evaluation:** The comparison table on the demo page lacks information about the evaluation methodology. What benchmark or dataset did the authors use to evaluate the "meaningfulness" and "naturalness" of these tokenizers? Without this context, it's difficult to assess the validity of the comparisons.

6. **Moshi Evaluation Methodology:** The authors still haven't addressed how they evaluated Moshi given that the open-source checkpoint is fine-tuned on one speaker and always begins with a welcome message. Did they wait for this greeting to complete before starting the evaluation?

Overall, I feel that both the presentation quality and the methodological clarity need significant improvement in order to raise my score to at least a weak accept.

---

Reply to Comment 1.1.1:

Comment: We greatly appreciate your further comments and questions, particularly your efforts in reading our demo page. We apologize for the confusion regarding the demo page link. The issue arises because the OpenReview platform automatically adds the "openreview.net/" prefix to the link.
The correct link for the demo page is https://audio-3059.pages.dev. Here are our responses:

1. Thank you for raising this question. For a fair comparison, the **audio-only** result reported in Table 7 of the Moshi paper should be used, as NTPP does not include access to text alignment training data. This text alignment data is proprietary to the Moshi team and is not publicly available, which prevents us from training a text-based version of NTPP (i.e., by incorporating an additional text channel). The audio-only sStoryCloze score in Moshi is **58.7**, which is notably lower than our NTPP score of **61.4**. We believe this performance gap could be further widened with large-scale single-channel pretraining comparable to Moshi's setup. We initially reported the score from the synthetic voice variant because Moshi only publicly released the model weights for that version.

| Model | sStoryCloze$\uparrow$ |
|---|---|
|Spirit-LM| 61.0 |
|**Moshi (Audio only)**| **58.7** |
|Moshi (Text and Audio, with synthetic data)| 60.9 |
|**NTPP**| **61.4** |

2. The correct number should be 140K, as shown in the latter part of the demo page. We apologize for the inconsistency. We were aware of this typo; however, the paper cannot be updated at this stage. Rest assured, this will be corrected in the final version. We sincerely appreciate your careful reading and will ensure that all figures are accurate in the camera-ready submission.

3. Yes, we will replace the current Figure 6 with the updated, polished version from the demo page in the camera-ready submission. This rebuttal will be made publicly available and we will diligently uphold our promise.

4. We respectfully emphasize that NTPP-Llama2 reduces latency by 21.67% compared to Moshi, which we believe cannot be considered "quite close."
Additionally, we have included the standard deviation alongside the mean values for five turn-taking conversations in our latency analysis, providing a more comprehensive view of the performance differences between Moshi and NTPP-Llama2.

| Model | Audio response latency for 5 turn-taking (ms) | Standard Deviation |
|---|---|---|
| Moshi | 261.6 | 9.75 |
| NTPP-Llama2 | 204.9 | 6.41 |

5. We would like to clarify that the "meaningfulness" and "naturalness" metrics are evaluated using the same settings as outlined in Table 2 of our paper. These metrics are based on the average results from the Fisher and CANDOR test sets. Furthermore, we have expanded our evaluation by incorporating turn-taking benchmarks to provide a more comprehensive comparison of the tokenizers. We train the NTPP model on the Fisher dataset with each audio tokenizer, splitting the dataset by conversation into a 6:2:2 ratio for training, validation, and testing, respectively. We evaluated our model on the in-domain Fisher test set and additionally on two out-of-domain (OOD) datasets: the Switchboard Corpus [1] and the Columbia Games Corpus [2].

|Audio Tokenizer | Meaningfulness $\uparrow$ | Naturalness $\uparrow$ | ROC-AUC of turn-taking label in Fisher $\uparrow$ | ROC-AUC of turn-taking label in Switchboard $\uparrow$ | ROC-AUC of turn-taking label in Columbia Games $\uparrow$ |
|---|---|---|---|---|---|
|Mimi| 4.05 | 4.28 | 83.22 | 84.38 | 82.25 |
|Vanilla RVQ|3.95|4.15| 83.05 | 83.85 | 81.90 |

**Audio Quality Metrics: Meaningfulness & Naturalness**

**Interaction Response Accuracy: ROC-AUC of Turn-Taking Labels on the Fisher, Switchboard, and Columbia Games Datasets**

Our Vanilla RVQ model achieves competitive performance despite being pre-trained without any text data. We believe this distinction underscores the potential of our Vanilla RVQ model in scenarios where text data may not be available or feasible to use.

6. Yes, we do wait for the greeting to complete before starting the evaluation.
This ensures that the initial welcome message does not interfere with the latency measurements and allows for a more accurate assessment of Moshi's performance during turn-taking conversations. **End of Response:** We believe these improvements thoroughly address all the concerns raised and significantly enhance the overall quality of the paper. If so, we would greatly appreciate your consideration in updating your rating. Thank you once again for your valuable suggestions, thoughtful feedback, and especially your patience in reviewing the detailed responses. [1]: Godfrey, John J., Edward C. Holliman, and Jane McDaniel. "SWITCHBOARD: Telephone speech corpus for research and development." Acoustics, speech, and signal processing, ieee international conference on. Vol. 1. IEEE Computer Society, 1992. [2]: Gravano A, Hirschberg J. Turn-taking cues in task-oriented dialogue[J]. Computer Speech & Language, 2011, 25(3): 601-634.
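The ROC-AUC figures for turn-taking quoted in the thread above can, in principle, be computed with the rank-based (Mann-Whitney) formulation of the metric. The sketch below is a minimal pure-Python illustration; the labels and scores are made up, since the actual annotations and model probabilities are not shown in this thread:

```python
def roc_auc(labels, scores):
    """ROC-AUC as the probability that a randomly chosen positive
    example is scored above a randomly chosen negative one
    (ties count as half) -- the Mann-Whitney U formulation."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical turn-taking data: label 1 = "a speaker switch occurs here",
# score = the model's predicted switch probability.
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(roc_auc(labels, scores))  # -> 0.75
```

A value of 1.0 means the model ranks every true switch above every non-switch; 0.5 is chance level.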
Summary: The paper presents a method for improving the conversational capabilities of speech language models using dual-channel spoken dialogue learning. It introduces a Next-Token-Pair Prediction (NTPP) approach within a decoder-only transformer architecture, enabling the model to predict both speakers' next speech tokens simultaneously. The study evaluates this method using benchmarks and analyzes its performance in turn-taking dynamics, interruptions, and response timing, comparing with existing models such as Moshi and dGSLM. ## update after rebuttal I have read the authors' reactions to my review. My recommendation was already very positive and I'm keeping it. Claims And Evidence: The claims are well supported by the evidence from the experiments carried out. The details of some of the evaluations are a bit sparse though (see below). A minor problem is the claim that pretraining is done in a textless fashion. While defensible in a narrow sense, this is misleading as the starting point of the pre-training is a trained textual LLM. Methods And Evaluation Criteria: The evaluations are appropriate in general. Theoretical Claims: NA Experimental Designs Or Analyses: The evaluation in section 5.5 is described only very briefly, lacking most details needed to understand its value. It's not clear what these automatically generated interactions sound like, or how exactly participants were asked to rate them. Supplementary Material: I briefly read through the supplementary material but did not review it in detail. Relation To Broader Scientific Literature: The paper's main contribution is the introduction of a decoder only modeling for dual channel speech data via next token pair prediction. Essential References Not Discussed: None identified. Other Strengths And Weaknesses: I don't understand what the two audio samples on the demo page are supposed to illustrate. They don't seem related to modeling dual channel audio in any obvious way. 
Other Comments Or Suggestions: It may be useful to discuss how feasible it would be to extend this approach to more than two channels. Conversations with more than two participants are common, and people handle them easily, so this capability would be useful to have in a dialog system.

Questions For Authors: NA

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for recognizing the value of our work. Here are our responses:

**Q1: Textless Pretraining.** Thanks for your question. In our paper, "textless pretraining" specifically refers to the fact that neither the first-stage single-channel speech pretraining nor the second-stage dual-channel learning incorporates text alignment, unlike some existing SpeechLM models. We recognize the potential ambiguity and will provide a clearer explanation in the camera-ready version.

**Q2: Section 5.5 Further Clarifications.** In our study, we aimed to assess the performance of existing audio models in real-world scenarios, particularly focusing on their ability to discern when to start and stop speaking. This is crucial for improving user experience, as current models often rely on Voice Activity Detection (VAD) to determine speech initiation, which can lead to interruptions whenever any sound is detected. We generated 200 thoughtful pauses (e.g., reflective pauses or conversational hesitations like "Hmm... let me think...") and 200 interruptions (e.g., abrupt interjections: “But wait—[interrupted]”) using GPT-4o, ensuring contextual diversity. Audio was synthesized via ChatTTS with explicit silence annotations. We compared the performance of our model against two other models, Cascaded and Moshi, by measuring the proportion of instances where the VAD model correctly identified whether speech should occur. To ensure alignment between the generated audio and labels, we employed human judges as the "gold standard". The closer this proportion was to human judgment, the better the model's performance. Figure 7 shows that our model is closer to human performance than Cascaded and Moshi. We have also added a demonstration of the corresponding audio effects on the [Demo Page](audio-3059.pages.dev).
**Q3: About the two audio samples on the demo page.** We provide more dialogue examples on the [Demo Page](audio-3059.pages.dev), which illustrate the performance of our NTPP model across several key scenarios, including multi-turn dialogues, "Interruptions & Reflective Pause" evaluation, and speech continuation. The audio samples demonstrate how NTPP handles these different situations. Each audio sample on the demo page provides a clear representation of the output dynamics for each channel at any given moment. This setup allows users to observe how the model manages dual-channel audio, particularly in terms of when to initiate and cease speech. By listening to these samples, users can gain insights into NTPP's ability to handle interruptions and reflective pauses effectively, showcasing its practical application in real-world interactions.

**Q4: About multi-channel systems (more than two channels)** Thank you for your suggestions. Expanding our approach to multi-channel conversation modeling is definitely on our roadmap, and our method can naturally be extended to this more complex setting. However, due to the lack of publicly available datasets, we are not yet fully prepared for this extension. Nonetheless, this remains a promising direction for our future work.
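The Figure 7 protocol described in the Q2 response above — scoring how often a model's speak/stay-silent decision matches the human gold standard — boils down to a simple agreement rate. A minimal sketch, with hypothetical decision lists standing in for the 400 pause/interruption clips:

```python
def agreement_rate(model_decisions, gold_decisions):
    """Fraction of clips (thoughtful pauses or interruptions) on which
    the model's speak / don't-speak decision matches the human label."""
    assert len(model_decisions) == len(gold_decisions)
    hits = sum(m == g for m, g in zip(model_decisions, gold_decisions))
    return hits / len(gold_decisions)

# Hypothetical labels: True = "the model should start speaking here".
gold = [False, False, True, True, False]
model = [False, True, True, True, False]
print(agreement_rate(model, gold))  # -> 0.8
```

The closer this rate is to 1.0, the closer the model's turn-taking behavior is to human judgment.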
Implicit Language Models are RNNs: Balancing Parallelization and Expressivity
Accept (spotlight poster)
Summary: This paper introduces implicit SSMs, which are a parameter tied form of SSM that can be run for arbitrarily many self-iterations until convergence. They propose training implicit SSMs in a scalable manner using phantom gradients from the implicit layers literature. They demonstrate the ability of implicit SSMs to state track on hard OOD settings and in language modeling. Claims And Evidence: * The discussion around equations (6) and (7) implies that these equations converge to a fixed-point. However, is this correct? Not all iterative equations converge, and no proof of this claim is given. * > Theoretically, we show that implicit SSMs implement the non-linear state transitions of RNNs * This claim is formalized in Theorem 1 * I am concerned about the shift from $h_t^*$ to $h_{t-1}^*$ in the proof, see my comments in the "Theoretical Claims" section * > Empirically, we find that only approximate fixed-point convergence suffices * Figure3, Mid supports this claim * Lines 1080-1 (Figure 10 caption) > The implicit Mamba2 retains its performance as the story length increases, whereas the explicit Mamba2's performance declines * This claim is contradicted by the performance of the explicit 3 layer in Figure 10c, which on the balance appears to have better performance as story length increases. * Lines 331-4 > The implicit Mamba2 models maintain their perplexity as sequence length increases, whereas the baseline Mamba2 models exhibit an increase in perplexity with longer sequences * Yes, demonstrated in Figure 4 * Effective Duality between Simultaneous Mode and Sequential Mode * Yes, this claim is supported Methods And Evaluation Criteria: Yes, the proposed methods and evaluation are excellent. 
* The OOD task in Figure3 was a great addition to the literature * The use of the CatbAbI dataset was a good idea Theoretical Claims: _Theorem 1 and Proof in Appendix B_ * This is a bit of a nit / more of a notational question, but in equation (14), wouldn't it be more correct to write $\dfrac{d h_t^*}{d h_{t-1}^*}$ (as opposed to $\dfrac{\partial h_t^*}{\partial h_{t-1}^*}$, as currently written)? Looking at equation (13), it seems to me like $\dfrac{\partial h_t^*}{\partial h_{t-1}^*} = \Lambda(z_t^*, x_t)$, while the full derivative contains the off diagonal correction terms. * **A more important point that needs to be discussed more precisely and clearly** is the choice of $h_{t-1}^*$ as opposed to $h_t^*$ as the argument of $\varphi$. In particular, on line 785, it is stated without justification that $z_t^* = \varphi(h_{t-1}^*, x_t, \theta)$. However, in equation (7), $z_t^{(s)}$ is a function of $h_t^{(s)}$ and not of $h_{t-1}^{(s)}$. This distinction between $h_{t-1}^*$ as opposed to $h_t^*$ is important because if $\varphi$ is actually a function of $h_t^*$, then the off-diagonal terms in equation (14) disappear and the Theorem is incorrect as stated. Therefore, I think it is very important for the authors to **add a lot more rigorous detail** about why $\varphi$ takes $h_{t-1}^*$ as an argument instead of $h_t^*$, especially because this shift is different from the set up for equations (6) and (7) * I also think it is very important that the authors provide a numerical check of equation (14) in their trained models. I.e, in their trained models, what actually are the Jacobians $\dfrac{\partial h_t^*}{\partial h_{t-1}^*}$. Do they correspond within numerical tolerance to the RHS of equation (14)? Or not? Please include such a numerical check in your rebuttal. _Phantom Gradient (Section 2.3)_ Is the minus sign in equation (4) correct? 
As I understand things, we know that

$$ G(\Phi, x, \theta) = 0.$$

Thus, taking derivatives wrt $\theta$, it follows that

$$ \dfrac{\partial G}{\partial \theta} + \dfrac{\partial G}{\partial z} \dfrac{\partial \Phi}{\partial \theta} = 0.$$

Now, we know that $\dfrac{\partial G}{\partial \theta} = - \dfrac{\partial F}{\partial \theta}$, and so plugging in it follows that

$$ \dfrac{\partial G}{\partial z} \dfrac{\partial \Phi}{\partial \theta} = \dfrac{\partial F}{\partial \theta}.$$

Therefore, it would seem that there is a sign error in equation 4.

Experimental Designs Or Analyses:

* Will the code be published? It is difficult to check the experiments without code.
* How was $\lambda$ set for the Phantom gradients (see equation 5)? This choice of hyperparameter does not seem to be discussed anywhere in the paper, even though there is a very good treatment of other experimental design choices. What happens if this hyperparameter $\lambda$ is varied?
* In Figure3 Right, I am concerned that the explicit models did the worst on train accuracy. Is there an explanation for this behavior? I would have thought that 16 layers of mamba would be enough to memorize a sequence of length 256, i.e. get to at least 90% train accuracy. Is there any way to explain this phenomenon or provide evidence that the explicit models are being trained to the utmost on the synthetic state tracking task (Figure 3).
* I would really like to see more reporting of wall clock time and memory usage in the experiments. I discuss wall clock time at length in the "Other Strengths and Weaknesses" section. As for memory, I was extremely impressed by the batch size of 1M tokens for the language modeling experiments. I want to know how much memory was required for training with a batch size of 1M tokens. Such an addition of max memory needed for training should be added to Table 6.
* Broadly speaking on the Language Modeling experiments (Table 1, Table 5, etc), I wasn't sure if a proper ablation was done for depth. For example, what would happen if the explicit models were made deeper but less wide (to preserve parameter matching). Their depth could be scaled by the number of inference steps reported in Table 2 (are those inference steps means over tokens in Table 2?). * The large scale language modeling tasks in Table 1 are good and show modest improvement of implicit over explicit models, but they do not blow away the explicit models. Are there any tradeoffs, i.e. downsides of using an implicit model, in terms of memory, compute, wallclock time, or some other metric? An explicit bolded paragraph on "Limitations" would be a nice contribution to contextualize the method and help practitioners. Supplementary Material: I reviewed all of the appendices. There does not seem to be provided code so I could not review that. Relation To Broader Scientific Literature: This paper builds on the implicit layers line of work to create an implicit model that can actually perform well on language tasks! Essential References Not Discussed: The authors cite Lim et al '24, but note that their method incurs cubic cost in terms of state size. The authors may also wish to cite * Gonzalez et al '24, "Towards Scalable and Stable Parallelization of Nonlinear RNNs," https://arxiv.org/abs/2407.19115 which is an extension of Lim et al but uses quasi-Newton methods to avoid the cubic cost in state size. The authors may also consider citing the following paper which started the deep SSM line of work * Gu et al '21, "Combining Recurrent, Convolutional, and Continuous-time Models with Linear State-Space Layers," https://arxiv.org/abs/2110.13985 In particular, see Appendix C. In this paper, Gu et al prove that many layers of an SSM can approximate a Picard iteration (a fixed point iteration). 
The methods in this proposed paper are effectively doing a Picard iteration as I understand it, so some comparison with the theory developed by Gu et al '21 may be useful to the academic community. While not required because of the ICML policy on concurrent work, the authors might consider citing and discussing the concurrent work

* Geiping et al '25, "Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach." https://arxiv.org/abs/2502.05171

They also use self-iterations, but with attention layers instead of SSM layers. They also apply it to language modeling. A robust discussion of similarities and differences in approach would be a great resource for the community.

Other Strengths And Weaknesses: On the balance, I think this is a great paper. However, I have various minor concerns scattered throughout this review that I think should be addressed. More importantly, I have a major concern about **wall-clock time** that I elaborate on in this section. I really need to see this question of wallclock time directly addressed before I can advocate for publication.

_Synthetic State tracking task (Figure 3)_

For example, in all of Figure 3, the implicit models are granted unbounded test iterations (what actually is the halting condition? I do not think the halting condition is stated explicitly in the paper). What is the wall-clock time for inference of the implicit models at test time in Figure3, compared to the wall clock time of Mamba2 (explicit) for inference? Also, I'm curious about the fairness of the Mamba2 baseline. In Figure3 right, the explicit and implicit models have matched train time depth, which is very good. But what happens if we match test time depth? I.e., report the average number of test time iterations used by implicit mamba in Figure3 right. And then give that many layers in depth for both training and test to explicit Mamba2 (go ahead and parameter match still).
**I would really like to see this fair baseline for Mamba2 before concluding that explicit layers are limited on this task.** However, this OOD task with increasing the number of $S_5$ tokens was extremely clever and a great addition to the literature.

_catbAbI task (Appendix D.2)_

What is the wall clock time, both for training and for test, of the implicit and explicit mamba2 models (1, 2, and 3 layers) on the catbAbI dataset? As demonstrated in Figure10a, a 3 layer explicit mamba2 has almost identical performance to a 1 layer implicit mamba2. However, if implicit mamba2 takes dramatically longer on wallclock time, it is difficult to recommend the use of 1 layer implicit (with a large number of fixed-point iterations) over a 3 layer explicit mamba2. How should I reconcile Figure 10a and Figure 10c however? On 10a it looks like explicit and implicit 3 layers almost always have similar performance; while on Figure10c it looks like implicit 3 layer is always better than explicit 3 layer. But how would a parameter matched explicit 6 layer do in comparison, both in test accuracy and on **training and test time wall clock time?**

_Language Modeling task_

It would be helpful to add wallclock time (for both training and inference) to Table 6 (in addition to peak memory requirement for training and inference, see discussion in "Experimental Designs or Analyses.")

_Extrapolation Advantage_

A main selling point of this paper (especially on the synthetic state tracking experiment and on the length extrapolation aspect of language modeling) is that implicit models seem to be better on extrapolation tasks (proportion of hard tokens, length of sequence) at test time. **Can you provide any theoretical perspective on why implicit models are better at extrapolation?** Doing so would really strengthen the paper!

Other Comments Or Suggestions: Style * The capitalization in this paper is a bit nonstandard at times, i.e.
sometimes too generous with capitalization (eg "Illusion of State", "Word Problem", "Implicit Function Theorem"; also capitalization of a sentence fragment after a colon), but then other times doesn't capitalize "theorem" when it should. The authors may wish to review standard English capitalization style guides. Typos * In equation 2, I think it should be $h_t$ and not $h_{t-1}$ * Line 434: the quotes around 'hard' are a bit ugly. * Line 662: "it's" should be "its" * Line 699 > Monoid whose elements can be inverted have a particularly right structure * Perhaps this sentence should read: "Monoids where every element has an inverse are called \emph{groups}." Small Suggestions * Another nit, but in lines 199-202, the authors write > The Illusion of State reveals that SSMs cannot simulate arbitrary finite state machines * wouldn't it be better style to not capitalize illusion of state? * I think SSMs can in fact simulate arbitrary finite state machines, just not without depth growing in the sequence length. I think the sentence should instead read "\citet{Merrill24} shows that SSMs cannot simulate arbitrary finite state machines with constant depth." * Another nit: line 202-4 > A hard state tracking problem in the sense that all state tracking problems can be reduced to it * not the most elegantly phrased sentence * strictly speaking, not _all_ state tracking problems can be reduced to $S_5$ (consider for example $\mathbb{Z}_7$). Questions For Authors: 1. How is Figure 2B related to equation (8)? It would seem that every token in depth still needs to propagate across the sequence length in order to simulate equations (6) and (7), which converge to (8). Why is only $h_t^*$ needed to be passed on, instead of all $h_t^{(s)}$? 2. Would you please elaborate on the following sentence in your conclusion? 
> While implicit models lift the limitations of state-of-the-art language models, self-iteration comes at a cost that only amortizes over the long tail of natural language

3. In Figure 3 (Left and Mid), there is a large discrepancy between the best run and all runs. Would you please elaborate on what is causing this difference? Is there any way to see from train performance alone which run will yield the highest test accuracy? Also, how many self-iterations are used on average at test time in Figure 3 (all 3 panels)? And how many self-iterations are used during training for Figure 3 Left?

4. Could more detail be provided about the following sentence on lines 251-5?

> Interestingly, the number of test time self-iterations is quite similar for the models trained with different upper bounds on the training time self-iterations, hinting that the models learn similar algorithms

How many test time self-iterations is this?

5. Why in Figure3 Right does unrolled have better training performance than phantom gradients, but the opposite relation is true for test?

6. For Figure3 Right, an important missing baseline would seem to be a truncated backpropagation, i.e. only backpropagate through 8 steps (as is done in Geiping et al 2025, "Scaling up Test-Time Compute with Latent Reasoning", https://arxiv.org/abs/2502.05171). This approach may be somewhat intermediate in the train/test tradeoff between phantom gradients and full unrolled backpropagation, and would also be constant memory. I think you are already doing this on the catbAbI task, see lines 927-8.

7. In the catbAbI experiments (D.2), did you ever try an ablation where you unrolled the entire way through, and never switch to self-iteration fixed-point search? In Figure 8, there is a sharp dip in validation accuracy at 5000 steps when you switch schedules; but otherwise, it doesn't look like the validation trajectory was changed very much as a result of the change in training procedure.
Moreover, I thought that the point of Figure 3 Mid was that a fixed number of self-iterations at train time was an acceptable way to train the model.

8. I must not be understanding something about Figure 9 (maybe the y-axis should be renamed from "Validation steps" to "number of self-iterations"). Still, I thought based on lines 927-8 that 32 self-iterations would be used on the first 5000 gradient steps, but it looks more like 4. How was this number chosen? And what does "trained for 5000 steps in unrolling mode, utilizing 32 steps with normal gradient checkpointing" mean?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: # General comment

We thank the reviewer for lots of insightful feedback, which significantly helps us to revise our manuscript. We appreciate the positive evaluation ("On balance, I think this is a great paper") as well as the engagement expressed by many detailed questions. In the face of the space limit (5k characters), we have to focus on selected questions.

# Theoretical contributions

While we are not aware of theoretical convergence guarantees, we checked that our models converge to fixed points on all datasets that we present in the paper. Models that reach fixed points have the properties claimed in Section 3.

> $h^*_{t-1}$ vs $h^*_t$ in $\varphi$.

There is a typo in Eq. (7), which should be analogous to Eq. (2), where the dependency on $h_{t-1}$ is correctly stated in line with standard RNN formulations. Consequently $\varphi$ takes $h_{t-1}^*$ as an argument.

> Is the minus sign in equation (4) correct?

No, thanks for spotting this!

> Please include such a numerical check in your rebuttal.

Incorporating the correct sign, we compared the RHS of Eq. (14) at the fixed point with the unrolled autograd (AD) Jacobian. The absolute difference between the AD Jacobian and Eq. (14) is three orders of magnitude smaller than the values in the Jacobian.

# Addressing wall-clock time

We have included the WCT for all datasets and the memory of language models in our response to __AqCH__.

## Synthetic state tracking

The halting condition is convergence of $z$ (relative diff. of 5%).

- setting an upper limit of 4 self-iterations takes 26s on the test set (4k examples) and gets 97.8% accuracy on the p=0.5 distribution.
- an upper limit of 16 self-iterations takes 35s and gets 99.8%. Note that iterations terminate after 6 steps on average due to the above halting condition.
- the explicit model with 16 layers takes 37s and gets 1.5%.

We believe that this is a fair baseline.
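The self-iteration with the halting condition described above (stop once the relative change of $z$ drops below 5%) can be sketched as a damped fixed-point loop. This is a scalar toy version for illustration only, not the paper's implementation; `lam` is an assumed damping factor and `f` an arbitrary contraction:

```python
def fixed_point(f, x, z0, lam=0.5, tol=0.05, max_iter=64):
    """Iterate z <- (1 - lam) * z + lam * f(z, x) and halt once the
    relative change in z falls below `tol` (the 5% criterion above)."""
    z = z0
    steps = 0
    for steps in range(1, max_iter + 1):
        z_new = (1 - lam) * z + lam * f(z, x)
        rel = abs(z_new - z) / (abs(z) + 1e-12)
        z = z_new
        if rel < tol:
            break
    return z, steps

# Toy contraction f(z, x) = 0.5 * z + x with x = 1 has fixed point z* = 2.
z, steps = fixed_point(lambda z, x: 0.5 * z + x, x=1.0, z0=0.0)
print(round(z, 3), steps)
```

With the 5% tolerance, the loop stops after a handful of steps near (but not exactly at) the fixed point, which matches the observation in the paper that approximate convergence suffices.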
> this OOD task with increasing the number of $S_5$ tokens was extremely clever and a great addition to the literature.

Thanks!

## catbAbI

The 1-layer implicit model trains faster than the 3-layer explicit model, but takes slightly more time during inference (see __AqCH__). The discrepancy between Figures 10a and 10c arises because Figure 10a averages accuracy across all story lengths per task, obscuring differences related to the distribution of story lengths, which varies a lot across tasks and is also non-uniform per task.

> how would a parameter matched explicit 6 layer do in comparison

According to Fig. 1, a deep enough model will sufficiently track state, which should also hold for catbAbI. We view catbAbI as an intermediate sanity check between our synthetic task and the larger language models, and hence did not investigate this further.

## Language Modeling

We will add the values reported in our response to __AqCH__ to Table 1 and Table 6.

# Questions for authors

Q1: It is an empirical contribution of this work that passing on $h_t^*$ suffices for inference, which enables constant-memory language generation. Intuitively, DEQs are all about fixed points, and “path independence” has been observed in prior works.

Q2: Self-iteration introduces additional FLOPs. Many practical problems might already be sufficiently addressed by the baseline models. Similar to test-time computation, self-iteration pays off only for certain problems.

Q3: There are multiple runs that overlap with the best run. Perhaps a box-plot would be a more appropriate choice. Left was trained with 32 iterations.

Q4: about 6 iterations (5-7 for different models) per token.

Q5: It seems that differentiating locally around the fixed point and not along the full trajectory provides a stronger bias towards learning sequential problems.

Q6: Phantom gradients are similar to the truncated backpropagation method used by Geiping et al., but use the update rule $z_{t+1} = (1 - \lambda) z_t + \lambda f(z_t, x)$.
For $\lambda=1$, the two algorithms match. Q7: We found that continuing the 4x unrolling for the complete training is not enough to achieve competitive accuracy. Q8: '32' is indeed a typo. Should be 4x unrolling till step 5000. # Further comments As requested by multiple reviewers, we will add a new section on limitations to the manuscript to discuss the moderate downstream task improvements in face of the larger wall-clock time. > Can you provide any theoretical perspective on why implicit models are better at extrapolation? Prior works indicate that implicit models have higher robustness to noise due to attractor dynamics. Viewing uninformative tokens as a source of noise (e.g. at longer sequence length) might explain the robustness. > provide evidence that the explicit models are being trained to the utmost on the synthetic state tracking task Fig. 1 (top left) shows that the number of layers required by the explicit model to solve the S5 problem grows linearly with the sequence length showing that explicit models are limited by their depth. --- Rebuttal Comment 1.1: Comment: Thank you very much for your response. Your response to reviewer AqCH regarding memory and wallclock time is excellent and should definitely be included in your final paper. I think the explicit limitations section will be a great addition as well. In light of the limitations regarding memory and wallclock time, what would you say the practical benefits of the implicit SSM model are, against deeper but less wide explicit models? Would it also be possible to answer these two of my original questions? * Will the code be published? * How was $\lambda$ set for the Phantom gradients (see equation 5)? This choice of hyperparameter does not seem to be discussed anywhere in the paper, even though there is a very good treatment of other experimental design choices. What happens if this hyperparameter is varied? 
One more question: > Q1: Intuitively, DEQs are all about fixed points, and “path independence” has been observed in prior works. What prior works regarding "path independence" are you referring to? Could you emphasize this point (including citations) more in the main text? With the addition of the wallclock time experiments, this is an excellent paper that is way too interesting not to publish. I am raising my score to a 4. I really hope this paper gets in. I'm still not sure if the method is practically useful, however, and would be interested in a candid discussion from the authors. --- Reply to Comment 1.1.1: Comment: We highly appreciate the reviewer's engagement and their perception of our work. We will include the additional information requested by the reviewers in the main text where possible, and will provide a complete overview in the appendix. >In light of the limitations regarding memory and wallclock time, what would you say the practical benefits of the implicit SSM model are, against deeper but less wide explicit models? The ratio between depth and width does not seem to fundamentally affect the performance of language models (https://arxiv.org/abs/2001.08361). Fig. 3 (right) shows that explicit models trained at the same depth do not generalize to harder samples or longer sequences, despite achieving comparable training accuracy to implicit models. This shows that there exist problems (e.g. the S5 word problem) where implicit models are able to capture the intrinsic algorithm, and explicit models trained with the same depth are not. > I'm still not sure if the method is practically useful however, and would be interested in a candid discussion from the authors. We acknowledge that problems where explicit models fail to capture the intrinsic algorithm might occur only rarely in natural language. 
While many tasks for chat-assistants might be perfectly well addressed with state-of-the-art models, problems like analyzing and completing code, architecting software, static program analysis, or sequence models for controlling industry processes might benefit from enriched expressivity. Recent studies uncover issues of transformers with constructing internal world models (https://arxiv.org/abs/2406.03689). The ability of implicit SSMs to implement arbitrary finite state machines could lead to improved world models, and we are excited to explore this property in future research. Furthermore, test-time computation and reasoning has been a major research direction over the past few months. Self-iteration can be viewed as reasoning in latent space, which has received recent interest (e.g. https://arxiv.org/abs/2412.06769). The role of implicit models in this direction remains to be explored in future work. The structural similarity of concurrent works such as Geiping et al., 2025 suggests that our theoretical contributions hold for their model as well. We generally agree that GPUs (or any von Neumann architecture) are not a perfect match for self-iterations. However, token generation on GPUs is heavily bottlenecked by HBM memory bandwidth, leaving compute cores underutilized. We believe that there is space for optimizing token generation in implicit models, particularly transformers, on GPUs by parallelizing the wave-front $s + t = \text{const}$, which would allow us to amortize HBM memory transfers over multiple steps in the iteration by increasing the arithmetic intensity. Please note that the kv-cache here is shared in the depth and token direction (unlike batching, which increases kv-cache size). Recently, emerging computational paradigms such as in-memory computing eliminate the necessity of transferring model weights for every iteration, which might favor models involving self-iterations and be a perfect match for implicit SSMs. > Will the code be published? 
Yes, the code and all experimental configurations will be published soon! > How was $\lambda$ set for the Phantom gradients (see equation 5)? […] What happens if this hyperparameter is varied? We’d like to refer to Sec 2.3, where we state that $\lambda$ “helps maintaining a small condition number at the cost of increased fixed-point iterations.” We experimented with $\lambda = 0.5$ and $\lambda = 0.8$ and observed no notable differences in task performance. It might be of practical interest as well that, in addition to 4 phantom gradient steps, we experimented with up to 8 steps with a 130M language model. More PG steps proportionally increase the memory footprint, but did not proportionally improve the performance. > What prior works regarding "path independence" are you referring to? Could you emphasize this point (including citations) more in the main text? Note that the gradient of an implicit model by the IFT is independent of the fixed-point search trajectory. A study that shaped our intuition for path independence was https://arxiv.org/abs/2211.09961 . We view the “simultaneous/sequential duality” presented in Sec. 5 and Fig. 2 as an important contribution of our work. Therefore, we are happy to provide more context and emphasis in the revised manuscript. > I am raising my score to a 4. I really hope this paper gets in. We deeply appreciate the constructive feedback and the reviewer’s willingness to engage in a candid discussion of the method’s practical relevance. We’re glad to hear about the raised score and will do our best to make the final version as clear and useful as possible.
Summary: This paper describes an implicit approach to training state-space models with arbitrary depth by having the models evaluated at a fixed point and implicitly differentiated using the implicit function theorem, like DEQs. They find that on certain tasks, implicit SSMs outperform SSMs, which are unable to learn these tasks. Claims And Evidence: This paper claims “Notably, our implicit models outperform their explicit counterparts on standard benchmarks”, which is supported by Table 1, and that they can in fact model stateful systems, which is supported by Figure 1. Methods And Evaluation Criteria: Several benchmarks are used for evaluation, which all seem reasonably well-suited to the task. Theoretical Claims: Theorem 1 claims that the transition function in equation (8) is non-linear and non-diagonal. Appendix B contains the proof. I am somewhat unfamiliar with this style of proof, so perhaps this is justified by an “almost always” that is implicit, or I am completely missing something. However, several times the compositionality of non-linear functions, non-diagonal matrices, etc. is used, which does not hold in general. For example, the line following equation 12 suggests that because $\partial \varphi / \partial h = -(I - \partial f / \partial z)^{-1} \, \partial f / \partial h$, it must be the case that $f$ being nonlinear implies $\varphi$ is nonlinear. However (again, I could be completely wrong), this is not guaranteed. For example, the equation $f(h, z) = e^h e^z + z$ would have $\partial \varphi / \partial h = -(1 - (e^h e^z + 1))^{-1} e^h e^z = (e^h e^z)^{-1} e^h e^z = 1$, so $\varphi$ would be linear despite $f$ being nonlinear. Obviously, this example is contrived specifically to be a counterexample, and the property holds in almost all cases, but unless there is something I am missing, it seems like this should be noted in the theorem statement. 
Similar “two nonlinear functions exactly cancelling each other out”-style counterexamples could surely be found for the claim on line 785, and a similar style of argument could be made to find a counterexample to the claim on line 796 that this Jacobian is necessarily non-diagonal. Experimental Designs Or Analyses: The experimental designs and analyses seem appropriate. Supplementary Material: I carefully read the proof of Theorem 1 in Appendix B. Relation To Broader Scientific Literature: I am not familiar enough with the existing literature to fill this section. Essential References Not Discussed: I am not familiar enough with the existing literature to fill this section. Other Strengths And Weaknesses: All points are addressed elsewhere. Other Comments Or Suggestions: Minor nits: Equation (8) should probably be paired with an additional equation $z^* = f_\theta(z^*, h_t^*, x_t)$ in order to make the fixed-point nature of the definition clear. Additionally, in the text immediately following this equation, “The fixed point $z^*_t$ depends on $h^*_t$, and hence by equation (7) on $h^*_{t-1}$” I believe you intended to write “by equation (6)” because that is where the dependency with $h_{t-1}$ is established. 216 right column: A5 should be $A_5$. Questions For Authors: All points are addressed elsewhere. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # General comment We would like to thank the reviewer for their comments. We are pleased that the reviewer finds our methods, evaluation criteria, experimental design, and analysis satisfactory. We appreciate the suggested clarifications and would like to take this opportunity to elaborate further on our theoretical results. We rigorously derive the Jacobian of the hidden state-to-state in Eq. (14) by taking the derivative of the fixed-point condition Eq. (8). Incorporating the reviewer's feedback, we will add the corresponding fixed-point condition for $z^*$ alongside Eq. (8) as $$z_t^* = f_\theta\left(z_t^*, h_{t-1}^*, x_t\right)$$ If $M = \frac{\partial g_\theta}{\partial z}\vert_{z_t^*, h_{t-1}^*}$ is non-singular, we can further apply the implicit function theorem to replace $\frac{\partial \varphi}{\partial h}$ in Eq. (14) with Eq. (12). While there might exist $\theta$ such that $M$ is singular, we numerically verified that $M$ is non-singular for randomly initialized networks, and we did not observe a singular $M$ during our training experiments. Eq. (14) contains products of matrices. As the reviewer points out, there is no guarantee that these products will not cancel out all non-diagonal terms, which could effectively lead to a diagonal Jacobian. As an example, the probability of two random Gaussian matrices multiplying to a diagonal matrix is zero. For the case of Eq. (14), we did not provide a rigorous proof that the probability is zero. Yet, we numerically checked that the Jacobian is non-diagonal, both using autograd as well as Eq. (14). To rule out any concerns about the rigor of our theoretical contribution, we suggest revising Theorem 1 to mention only what we can rigorously prove: Theorem 1: The Jacobian of the implicit SSM is given by Eq. (14). If $$ \frac{\partial g_\theta}{\partial z}\vert_{z_t^*, h_{t-1}^*}$$ is non-singular, then $\frac{\partial\varphi}{\partial h}$ is given by Eq. (12). 
Remark 2: In contrast to the explicit state-space model in Eq. (1) and (2), the implicit state-space model allows for non-linear and non-diagonal state-to-state transitions. We empirically observe that $\varphi$ is non-linear and that the Jacobian is non-diagonal.
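The numerical check discussed in this rebuttal (comparing the implicit-function-theorem Jacobian against a direct Jacobian of the solved fixed point, and inspecting its off-diagonal entries) can be reproduced with a small NumPy sketch. The toy map $f(z, h) = \tanh(Wz + Uh)$ is a stand-in for illustration, not the paper's actual layer:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Small random map f(z, h) = tanh(W z + U h); scaling W keeps the
# iteration contractive so the fixed-point search converges.
W = 0.2 * rng.standard_normal((n, n))
U = rng.standard_normal((n, n))

def solve(h, iters=500):
    """Fixed-point iteration for z* = tanh(W z* + U h)."""
    z = np.zeros(n)
    for _ in range(iters):
        z = np.tanh(W @ z + U @ h)
    return z

h = rng.standard_normal(n)
z_star = solve(h)

# IFT Jacobian: dz*/dh = (I - df/dz)^{-1} df/dh at the fixed point.
D = np.diag(1.0 - np.tanh(W @ z_star + U @ h) ** 2)  # tanh'(pre-activation)
J_ift = np.linalg.solve(np.eye(n) - D @ W, D @ U)

# Central finite differences of the solved fixed point, column by column.
eps = 1e-6
J_fd = np.column_stack([
    (solve(h + eps * e) - solve(h - eps * e)) / (2 * eps)
    for e in np.eye(n)
])

assert np.allclose(J_ift, J_fd, atol=1e-3)   # both routes agree
# The state-to-state Jacobian is clearly non-diagonal for this generic f.
assert np.abs(J_ift - np.diag(np.diag(J_ift))).max() > 1e-4
```

This mirrors the rebuttal's argument: the IFT formula and direct differentiation of the converged fixed point coincide, and the resulting Jacobian carries non-zero off-diagonal entries for a generic non-linear map.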
Summary: 1. The authors propose a DEQ-ified version (referred to as implicit models) of state space models like Mamba2. 2. This is motivated by the fact that the diagonal (and real) state transition matrix of these models is not expressive enough for state tracking. They show that an implicit model has a non-linear and non-diagonal state transition matrix, which is required for state tracking. 3. They test the implicit Mamba2 model on the CatbAbi synthetic benchmark and on language modeling on the D-PILE dataset. Claims And Evidence: For theoretical claims and issues, please see "Theoretical Claims". For methodological claims and issues, please see "Methods And Evaluation Criteria". Methods And Evaluation Criteria: Overall, **I think this is a good paper**; however, the only methodological/evaluation concern I have is on the **wall-clock time of this method**. Please correct me if I am wrong, but I expect that this method's (4+1) version takes at least 4x the number of FLOPs in the forward pass of vanilla Mamba2 and that its (32+4) version would take 32x the number of FLOPs of vanilla Mamba2. To probe this a little more, I suggest the following experiments: 1. Can the authors provide a wall-clock time analysis for their method against vanilla Mamba-2? 2. I believe the authors have currently controlled for the number of parameters; could they also do a language modeling experiment with a control on the FLOPs? I am curious if the model's superior performance (on sizes > 350M) is actually due to the increased computation rather than the ability of the model to do state tracking. Furthermore, I think the wall-clock time becomes even more relevant as recent works like [1] have shown that Mamba2 can do state tracking if the transition matrix has complex eigenvalues. I think the associative scan implemented for Mamba-1 already supports diagonal matrices with complex eigenvalues, and it might be a more practically efficient solution to this problem. 
**Could the authors comment and contrast their method with this solution?** *NOTE: I would not hold this fact against this paper since [1] is a recent work.* [1]: Unlocking State-Tracking in Linear RNNs Through Negative Eigenvalues. Riccardo Grazzi, Julien Siems, Jörg K.H. Franke, Arber Zela, Frank Hutter, Massimiliano Pontil Theoretical Claims: In light of my comment on [1] in "Methods And Evaluation Criteria", I am curious if the authors can compare/comment on the difference in expressivity of their method and an SSM with a complex-valued diagonal transition matrix. Is it possible to get a characterization of the class of transition matrices that the model admits? [1]: Unlocking State-Tracking in Linear RNNs Through Negative Eigenvalues. Riccardo Grazzi, Julien Siems, Jörg K.H. Franke, Arber Zela, Frank Hutter, Massimiliano Pontil Experimental Designs Or Analyses: Please see "Methods And Evaluation Criteria" Supplementary Material: Did not review the supplementary material, as it mostly contains background material and the proof of Theorem 1. Relation To Broader Scientific Literature: Tries to fix the problem that SSMs cannot do state tracking, which is known in the community. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Running the model on more state tracking tasks like Parity and Arithmetic Mod (w or w/o brackets) might help strengthen the paper. [1]: Unlocking State-Tracking in Linear RNNs Through Negative Eigenvalues. Riccardo Grazzi, Julien Siems, Jörg K.H. Franke, Arber Zela, Frank Hutter, Massimiliano Pontil Questions For Authors: Please see "Theoretical Claims" and "Methods And Evaluation Criteria" Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the Reviewer for the positive perception. Below we try to answer to the reviewer’s remaining questions. __Q1: Can authors provide a wall-clock time analysis for their method against vanilla Mamba-2__ Thanks. We will add the following to the paper comparing throughput and wall clock time (WCT) relative to the explicit model for the largest models (760M and 1.3B). ### Training Token Throughput and Wall Clock Time |Model|760M Tok/s|760M Rel T-put|760M Rel Time|1.3B Tok/s|1.3B Rel T-put|1.3B Rel Time| |-|-|-|-|-|-|-| |Mamba2*|1872|-|-|588|-|-| |Mamba2(4+1)|914|49%|205%|309|53%|191%| |Mamba2(24+4)|209|11%|180%|71|12%|166%| |ImpMambaAvg|546|29%|343%|184|31%|319%| |Llama†|775|-|-|582|-|-| |Llama(4+1)|472|61%|164%|237|41%|196%| |Llama(32+4)|47|6%|166%|50|9%|234%| |ImpLlamaAvg|131|16.8%|297%|149|26%|391%| The averaged numbers for the implicit models take the curriculum into account. ## Inference WCT Measurements (Time per token in milliseconds, averaged over 2048 tokens generated) |Steps|Llama 130M|Llama 1.3B|Mamba 130M|Mamba 1.3B| |-|-|-|-|-| |expl|46.7|97.9|23.5|45.7| |1|62.6|120.7|29.2|55.6| |2|130.8|236.7|53.5|102.9| |4|200.3|421.6|93.6|182.0| |8|440.3|745.2|180.2|3612.0| |16|748.8|1294.0|356.1|705.0| |32|1356.3|3204.8|710.5|1414.7| ## Memory Usage in Inference [MB] (Implicit / Explicit) |Model|Llama 130M|Llama 1.3B|Mamba 130M|Mamba 1.3B| |-|-|-|-|-| |Implicit / Explicit|871 / 511|10216 / 5281|935 / 547|10592 / 5488| ## Word Problem Inference We evaluate 4k samples at batch size of 512. For the single layer implicit Mamba2 - setting an upper limit of 4 self-iterations takes 26s and gets 97.8% accuracy on the p=0.5 distribution - setting an upper limit of 16 self-iterations takes 35s and gets 99.8%. Note that iterations terminate after 6 steps on average due to convergence. The explicit model with 16 layers takes 37s and gets 1.5%. 
This explicit model has the same dimensions per layer, and hence 16x the number of parameters compared to the implicit model. ## Catbabi WCT Training |Model|GPU-Hrs|Rel T-put|Rel Time| |-|-|-|-| |expl-1-lyr|1.02|-|-| |impl-1-lyr|1.83|55.74%|179.41%| |expl-2-lyr|1.81|-|-| |impl-2-lyr|2.96|61.15%|163.54%| |expl-3-lyr|2.58|-|-| |impl-3-lyr|4.51|57.21%|174.81%| ## Catbabi WCT Inference (Time per token in milliseconds, averaged over 50 tokens generated) |Model|Step-1|Step-2|Step-4|Step-8|Step-16|Step-32| |-|-|-|-|-|-|-| |expl-1-lyr|1.879|||||| |impl-1-lyr|2.966|4.8321|8.6203|16.0183|30.9154|60.5825| |expl-2-lyr|3.1831|||||| |impl-2-lyr|4.7185|8.2655|14.8508|27.6253|51.3367|105.8571| |expl-3-lyr|4.4807|||||| |impl-3-lyr|6.3261|11.0607|20.2967|38.7703|75.5918|149.1918| __Q2: Could authors do a language modelling experiment with a control on the FLOPs.__ Our primary goal is not FLOPs efficiency, but rather exploring a fundamental trade-off: how much true recursion is necessary for language modeling and reasoning, balancing parallelizability and expressiveness. While a single iteration of our implicit model approximately matches the explicit model in FLOPs, our intention isn't to claim FLOPs optimality. Instead, we aim to investigate qualitative differences, accepting increased representational power at the expense of greater and dynamic depth. Below, we provide a test-time compute table which allows for a FLOPs-matched comparison and highlights the flexibility of implicit models to trade off compute vs performance at test time. | Task | Model | f:1 | f:2 | f:4 | f:8 | f:16 | fixpt | |---------|----------------------|------|------|------|------|------|-------| | Avg Acc over tasks in Table1 | Implicit Mamba 1.3B|0.31|0.38|0.52|0.56|0.56|0.56| | Avg Acc over tasks in Table1 | Implicit Llama 1.3B|0.30|0.30|0.42|0.57|0.59|0.59| __Q3: Could the authors comment and contrast their method against SSM with complex diagonal values/negative eigen values [1]__? 
We discuss the paper that the Reviewer is referencing in the Related Work section. It is important to see that even with negative eigenvalues/complex diagonal values, the models in that reference are not able to solve the full S5 problem but only S5 restricted to transitions of (two-element) swaps (see Fig. 4 in Grazzi et al.). __Q4: Is it possible to get a characterization of the class of transition matrices that implicit model admits__? While a full characterization of the transition matrices goes beyond our study, Eq. (14) provides first insights. $\frac{\partial \varphi}{\partial h_{t-1}^*}$, the derivative of the fixed point w.r.t. hidden state in Eq. (12), is a source of general non-diagonal entries that depends on the implicit function $\varphi$. The hidden state is propagated through fully connected layers in the forward pass during the self-iteration. This leads to non-linear and non-diagonal contributions comparable with RNNs or multi-layer feed-forward networks.
Summary: This paper proposes implicit language models, which are RNNs defined implicitly via fixed-point iterations. Theoretically, the authors show that implicit models can represent non-linear and non-diagonal state transitions of RNNs, overcoming the limitations of transformers and state-space models (SSMs), which are restricted to simpler, linear transitions. Empirical results show that implicit models can solve a challenging state-tracking problem that transformers and SSMs fail to learn. In addition, the authors scale implicit models up to 1.3B parameters and show they outperform explicit counterparts on language modeling benchmarks, with favorable length generalization and auto-regressive generation capabilities. Claims And Evidence: The study provides sufficient evidence to support its main theoretical and empirical claims: - The claim that implicit state-space models can represent non-linear and non-diagonal state transitions is proven theoretically in Sec 3.1 and Appendix B. - The authors reproduce the finding from prior work regarding the inability of transformers or SSMs to solve hard state tracking problems such as S5, and show that implicit SSMs can behave like RNNs, in Sec 4 and Fig 1. - The scaling of implicit models to large language modeling tasks up to 1.3B parameters is supported by the results presented in Section 5 and the detailed experimental setup in Appendix D.3 (9 tasks). - The claims regarding length extrapolation capabilities and the duality between sequential and simultaneous modes are supported in Section 5, and Figs 4 and 2. Methods And Evaluation Criteria: - The proposed method is well motivated and shown theoretically to address the previous models' inability to represent non-linear state interactions. The expectation is that overcoming this limitation will lead to more expressive modeling and state tracking. 
- For evaluation, the authors focus on synthetic and real-world state tracking problems to systematically evaluate their theoretical findings. This is a suitable experimental choice, as it grounds the theoretical findings in empirical evidence, making the findings more trustworthy. - In addition, they evaluate downstream performance, carrying out experiments with language models of increasing size up to 1.3B. These results show whether the expressivity is actually useful on language modeling and downstream real-world tasks. Theoretical Claims: The main theoretical claim of the paper is captured in Theorem 1 and shows that the transition function defined by the implicit SSM is non-linear and non-diagonal. I checked the proof and found it to be logically correct; it applies the implicit function theorem to the function $g(z, h, x, \theta) = z - f_\theta(z, h, x)$ and then shows that the derivative of the implicit function $\phi(h,x, \theta)$ with respect to $h$ is a non-linear function if $f$ is non-linear. Based on this, the state-to-state Jacobian is then shown to be non-linear. Experimental Designs Or Analyses: Yes, I reviewed the experimental designs and analyses presented in the paper and found them to be sound and well-executed in general. Below I list a few non-major issues: 1. State tracking experiments - It would be useful to provide more details on how exactly the synthetic data distributions were created and what the intuition is behind the chosen parameters. Providing some examples would also help. - In addition, a few experimental details are missing on the hyper-parameters of the Mamba2 model on both the synthetic and CATBABI tasks (number of layers, learning rates, batch sizes, etc.). 2. 
Language modeling experiments - The scaling experiments would provide more confidence in practical impact if the model size went up to at least 7B; it is not guaranteed that the behavior observed below 1.3B will generalize, nor that non-linear transitions remain useful at larger scale. - It would help if experiments included more recent benchmarks for large language models such as MMLU, BBH, HELM. - Provide more details on the training budget used for training the implicit models and state-space models and quantify the training + inference costs. It was not clear to me how the authors ensure an equal budget for convergence and what the exact computational benefit is for implicit models. Supplementary Material: I reviewed the following sections: B) proof of theorem, C) additional results and D) experimental details. Relation To Broader Scientific Literature: The contributions are generally well-situated within the broader scientific literature: - The paper builds upon previous theoretical work that has identified limitations of state-space models in capturing complex sequential states and recognizing certain formal languages (Merill et al. 2024, Sarrof et al. 2024). The proposed models aim to address exactly these limitations. - The authors develop implicit models building on top of deep equilibrium models and the implicit function theorem from prior work. The adaptive computation that is inherited from these models is a useful property that has been shown to be useful in previous research (Graves 2017, Dehghani et al. 2019). - Provides additional evidence to the existing literature that looped models are able to generalize better to input lengths not seen during training (Yang et al. 2024a). It would be useful to discuss the advantages of implicit models compared to recurrent-attention-based transformers or hybrid transformers that make use of full and recurrent attention mechanisms that are competitive in terms of the quality and speed tradeoff. 
Essential References Not Discussed: There are prior works that used simple and more advanced recurrent attention mechanisms based on the kernel-based view of attention for transformers (Tay et al. 2020). With such formulations of attention, transformers are converted into RNNs, which do not have the problems pointed out in this paper. It would be essential to discuss the unique advantages of implicit models beyond implementing non-linear state transitions of RNNs, which has been addressed in the past. Other Strengths And Weaknesses: Strengths: - The idea of using fixed-point iterations to combine the expressive power of RNNs and the parallelization benefits of transformers is quite interesting and useful, since the training becomes more efficient due to the computation of gradients with a constant memory footprint. - Provides a convincing theoretical analysis of implicit models and makes a solid connection between the benefits of RNNs and Transformers. - Experimentation is thorough and shows promising results up to 1.3B-parameter models. The evaluation covers both tasks that require state tracking and tasks used in large language modeling studies. Weaknesses: - There is a lack of discussion and comparison to hybrid models that leverage the benefits of RNNs and Transformer models through a combination of recurrent and softmax attention mechanisms. - The family of implicit models provides an appealing solution for the lack of non-linear state transitions; however, the paper fails to motivate why that is useful in practical real-world tasks, where instruction-following models based on Transformers perform exceptionally well. - The scaling results up to 1.3B parameters do not provide conclusive evidence that the state expressivity is actually needed for good performance on downstream tasks. In addition, the performance improvements are also not very consistent across different model sizes, which further casts doubt on the generalizability to larger model sizes. 
Other Comments Or Suggestions: N/A Questions For Authors: 1. Could you elaborate on how the implicit models are trained exactly? A more detailed explanation of the training details (e.g. masking, next-token prediction, etc.) would help the reader better understand how this approach is implemented and what training cost is involved. 2. Regarding the curriculum-based approach and the duration of different phases, what are the key considerations for the different design choices for different datasets? It would be good to specify how sensitive the final performance is to these choices. 3. Previous studies have devised hybrid architectures that leverage the benefits of both RNNs and Transformers. Are the expressivity issues applicable to them? How do implicit models compare to them? 4. What are the unique contributions of this work in the area of adaptive computation? Reading through the related work, it is not very clear what the contribution of this work is. 5. The implicit models are shown to have good properties, but there is little emphasis on their limitations. I wonder if the authors could provide some additional discussion and analysis on which tasks or settings the implicit models have difficulties with or underperform other models. Also, do the authors expect their results to generalize to even larger model sizes? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback and positive assessment of our submission. Below we address raised concerns and assumptions: **Experimental details**: we will make sure to double check Appendix D to see if any detail is missing. Please see additional wall clock time and memory footprint in response to reviewer __AqCH__. We are also releasing code with precise experiment configurations. **Essential references not discussed:** The Reviewer appears to make the key assumption that some or all of the models with linear attention or kernelized attention as discussed in (Tay et al. 2020), or models with gated linear attention and variations of gated state-space models such as Mamba (Gu & Dao 2024) already address the theoretical issues discussed in our manuscript. We would like to respectfully clarify this assumption. Yes, these models admit a recurrent inference mode. [1] discusses the structural similarities between these attention variants. However, they all face the same limitations discussed in (Merill et al. 2024, Sarrof et al. 2024) since their recurrence is not expressive enough. We discuss xLSTM as an exception, whose sLSTM module is a non-linear + non-diagonal RNN, which lacks parallelization, though. If we should be missing relevant related work, we are happy to take further suggestions. [1] https://arxiv.org/abs/2405.15731 **Practical usefulness:** Our primary goal is to highlight qualitative differences between implicit and explicit sequence models, without claiming universal superiority. Our experiments demonstrate that implicit models scale practically, as further confirmed by concurrent work (https://arxiv.org/pdf/2502.05171). Thus, implicit models are a viable choice when complex sequential processing (e.g., state tracking) is required. We offer a design point where you can get a qualitative jump in expressiveness at the expense of compute, a trade-off reminiscent of the test-time compute paradigm. > [...] 
however, the paper fails to motivate why that is useful. Static-depth models show exceptional performance. Yet, they struggle even with certain regular languages. This limits their capabilities to execute tasks that require state tracking (e.g. managing supply chain processes). We tested GPT 4o and o1 (API) on the S5 state-tracking task. |Sample Length|o1-mini Acc|GPT-4 Acc| |-|-|-| |5|0.967|0.200| |15|0.967|0.100| |32|0.100|0.133| > 1.3B parameters do not provide conclusive evidence that the state expressivity is actually needed for good performance We agree that most downstream tasks do not require extensive state tracking, and we do not claim that it is required in all tasks. We do see increased performance on HellaSwag, a benchmark requiring limited state tracking. # Questions 1. Implicit models for language modeling and CatbAbI used a next-token prediction loss with phantom gradients, backpropagating through a fixed number of self-iterations (4 for language modeling, 6 for CatbAbI) after gradient-free searches (up to 24/32 iterations for language modeling, 50 for CatbAbI). 2. The LM (The Pile) curriculum balances cost and accuracy by applying more self-iterations to only n=20% of tokens (Table 1). To address the Reviewer's questions, we tested n=10% on the 1.3B Mamba2 (24+4) model, showing curriculum robustness. |Metric|10%|20%| |-|-|-| |Tokens seen|207|207| |LAMBADA |0.4186|0.4116| |HellaSwag|0.3672|0.3527| |PIQA|0.6502|0.6572| |Arc-E|0.4596|0.4815| |Arc-C|0.2372|0.2372| |Wino|0.5170|0.5130| |OpenQA|0.2900|0.3000| |Average|0.4200|0.4219| 3. See the initial statement. 4. Implicit models can implement non-diagonal + non-linear transitions. We show in Fig. 2 (and Sec. 5) that implicit models allow one to conduct sequential inference, e.g. language generation, only carrying forward the converged hidden state (SSM) or KV-cache (Transformer). This allows memory allocation independent of the number of iterations. 
While RNN-based ACT (Graves, 2017) makes the non-parallelizability of RNNs even worse by sequentially iterating more steps per token, implicit models learn the adaptive budget for all tokens in parallel, which allows us to scale to 1.3B models.
5. We will further discuss limitations and computational differences between implicit and explicit models. Larger models suffer less on state-tracking tasks (Fig. 1), but even the largest GPT models remain limited.

# New experiments

> "Benchmarks"

We were able to evaluate the models on an additional task, the MMLU. Please note that we selected our benchmarks to align with the Mamba2 and xLSTM papers.

|Model|MMLU Accuracy|
|-|-|
|1.3B_implicit_mamba2 (24+4)|0.269|
|1.3B_implicit_llama (32+4)|0.258|
|1.3B_llama_baseline|0.2483|
|1.3B_mamba_baseline|0.2502|

**Computational benefits:** Note that implicit models have qualitatively higher expressivity than explicit models, as shown in our synthetic experiments. In addition, we show results depending on the compute budget in our response to __AqCH__.
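To make the fixed-point inference described in this rebuttal concrete, the following toy sketch illustrates self-iteration of an implicit layer to a converged hidden state; in phantom-gradient training, only the last few iterations would be backpropagated. The recurrence, dimensions, and iteration counts here are our illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def implicit_step(h, x, W, U):
    # One self-iteration of a toy implicit layer: h <- tanh(W h + U x).
    return np.tanh(W @ h + U @ x)

def implicit_forward(x, W, U, max_iters=50, tol=1e-6):
    """Iterate to an (approximate) fixed point. In phantom-gradient
    training, gradients would flow only through the last few iterations;
    the earlier iterations act as a gradient-free search."""
    h = np.zeros(W.shape[0])
    for t in range(max_iters):
        h_next = implicit_step(h, x, W, U)
        if np.linalg.norm(h_next - h) < tol:
            return h_next, t + 1
        h = h_next
    return h, max_iters

rng = np.random.default_rng(0)
# Scale W down so the map is contractive and a fixed point exists.
W = 0.1 * rng.standard_normal((8, 8))
U = rng.standard_normal((8, 4))
x = rng.standard_normal(4)

h_star, n_iters = implicit_forward(x, W, U)
# Residual of the fixed-point equation at the returned state.
residual = np.linalg.norm(implicit_step(h_star, x, W, U) - h_star)
```

Because tanh is 1-Lipschitz and W is scaled down, the map is a contraction, so the iteration converges; only the converged state needs to be carried forward during sequential inference, mirroring the memory argument made in point 4 above.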
VisionTS: Visual Masked Autoencoders Are Free-Lunch Zero-Shot Time Series Forecasters
Accept (poster)
Summary: This paper proposes VisionTS, which uses the strong pre-trained ability of vision models to help the time series modality. The core idea is the inherent similarity between images and time series, such as trend, seasonality, and so on. To align the time series input with the image input space, the authors first convert the time series into a 2D grayscale image, followed by the MAE for prediction. The experiments are solid, and the performance of the proposed VisionTS is noteworthy. Claims And Evidence: The main claim of this paper is that images and time series share similar properties. The authors explain this point with an intuitive explanation. However, it is not very convincing to me given the domain gap between these two modalities. Further empirical evidence may be needed to justify why the pre-trained MAE can be used, even in a zero-shot way, to perform time series forecasting. Methods And Evaluation Criteria: - The proposed methods mainly focus on the transformation of how to align the input spaces of images and time series. The solution of transforming to a 2D grayscale image is reasonable. - The evaluation metrics are commonly used in existing works, which makes sense. Theoretical Claims: No theoretical proof is included in the paper. Experimental Designs Or Analyses: - The experiments are solid. The authors have included various time series tasks, including zero-shot and few-shot, and have compared with both LLM-based and classic time series models. - I have noticed that the authors have discussed that VisionTS cannot model the interaction between multivariate time series data. Since multivariate data is almost ubiquitous in the real world, this inability may weaken the practical usage. However, since this paper is a first attempt, this problem may be addressed in the future. Supplementary Material: I have reviewed the Suppl.
Relation To Broader Scientific Literature: The idea of using vision models for time series is novel and can inspire further explorations. Essential References Not Discussed: The experimental comparison lacks one highly related work, CALF [CALF: Aligning LLMs for Time Series Forecasting via Cross-modal Fine-Tuning, AAAI 2025], which is the existing SoTA among LLM-based time series forecasting works. How is the performance of the proposed VisionTS compared with CALF? Other Strengths And Weaknesses: None Other Comments Or Suggestions: - LLMs have recently been shown to be not very necessary for time series [Are Language Models Actually Useful for Time Series Forecasting? NeurIPS 2024]. So I wonder whether the vision model also suffers from a similar situation? Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your encouraging response. **We are delighted that you find our paper novel, with solid experiments and the performance is noteworthy.** Below are our responses: > Claims And Evidence: However, it is not very convincing to me since the domain gap between these two modalities. Further empirical evidence may be needed to justify why the pre-trained MAE can be used even in a zero-shot way to perform time series forecasting. - As you noted, we are the first to leverage a pre-trained vision model for zero-shot forecasting. **We understand that ground-breaking ideas often require time to gain community endorsement.** As an initial exploration, we've conducted extensive experiments to validate VisionTS's effectiveness. To our knowledge, our evaluation benchmark is the largest among existing TSF foundation models. - To explore the domain gap between the modalities further, we visualize their similarities in Fig 7. We find notable heterogeneity within time-series data across domains, with images potentially "bridging" these isolated time-series representations, which might explain why VisionTS outperforms some cross-domain TSF models. - We welcome any suggestions that could further strengthen the persuasiveness! > Essential References Not Discussed: The experimental comparison lacks one highly related work CALF [CALF: Aligning LLMs for Time Series Forecasting via Cross-modal Fine-Tuning, AAAI2025], which is the existing SoTA LLMs-based time series forecasting works. How is the performance of the proposed VisionTS compared with CALF? 
- We compare the full-shot results reported by CALF with VisionTS's zero-shot results below:

| | VisionTS (zero-shot) | CALF (full-shot) |
| ---------------- | -------------------- | ---------------- |
| ETTh1, MSE | **0.390** | 0.432 |
| ETTh1, MAE | **0.414** | 0.428 |
| ETTh2, MSE | **0.333** | 0.349 |
| ETTh2, MAE | **0.375** | 0.382 |
| ETTm1, MSE | **0.374** | 0.395 |
| ETTm1, MAE | **0.372** | 0.390 |
| ETTm2, MSE | 0.282 | **0.281** |
| ETTm2, MAE | **0.321** | **0.321** |
| Electricity, MSE | 0.207 | **0.175** |
| Electricity, MAE | 0.294 | **0.265** |
| Weather, MSE | 0.269 | **0.250** |
| Weather, MAE | 0.292 | **0.274** |

- We can observe that VisionTS, even in a zero-shot scenario, proves comparable to CALF in full-shot settings. It highlights the greater transferability of the visual modality to time series over the textual modality.

> Other Comments: The LLM have been recently proved not very necessary for time series [Are Language Models Actually Useful for Time Series Forecasting? NIPS2024]. So I wonder whether the vision model also suffers the similar situation?

- We agree that the text modality may offer limited benefit to time series. Following the paper's ablation study, we removed or replaced VisionTS's visual backbone with simpler modules. Appendix D.3 shows these changes degrade performance, indicating visual knowledge is indeed beneficial for TSF, unlike textual knowledge.

**We hope these responses can fully address your concerns. Thank you once more for your detailed feedback!**

--- Rebuttal Comment 1.1: Comment: Thanks for providing this rebuttal. I have carefully read this response. My major concerns have been addressed. However, I still have the following suggestions for the authors to further improve their paper. 1. Moving the justification of "why vision model can do TS tasks" to the front of the paper, since this justifies the motivation of this work. 2.
From the comparison with CALF, it seems the proposed method falls behind on the Electricity and Weather datasets. It is suggested to add related discussion in the revision. In short, despite the limitation of real-world multivariate data, and comparison with existing SoTA, given this work is the first exploration of applying vision models to TS, I tend to maintain my previous positive rating.

--- Reply to Comment 1.1.1: Comment: Thanks for your quick response and positive feedback on our paper!

> Moving the justification of "why vision model can do TS tasks" to the front of the paper, since this justifies the motivation of this work.

Thank you for your suggestion. In the front of the paper (Introduction section), we have included an illustrative example (Figure 2) and referred to the modality visualization experiment (Lines 106-118) to support our motivation. We will emphasize this further in the final version, possibly by bolding the relevant texts.

> From the comparison with CALF, it seems the proposed method falls behind on the Electricity and Weather datasets. It is suggested to add related discussion in the revision.

Thank you for pointing this out. However, we respectfully clarify that VisionTS operates in **zero-shot mode** in this comparison (i.e., *without training* on the Weather and Electricity datasets). Full-shot results for VisionTS are reported in Table 19 (Appendix D.2) and summarized as follows:

||VisionTS (*full-shot*)|CALF (*full-shot*)|
|-|-|-|
| Electricity, MSE | **0.165** | 0.175 |
| Electricity, MAE | **0.259** | 0.265 |
| Weather, MSE | **0.227** | 0.250 |
| Weather, MAE | **0.262** | 0.274 |

Notably, these results were achieved by fine-tuning VisionTS for just one epoch (only fine-tuning layer normalization while freezing other modules). This demonstrates that VisionTS, with minimal fine-tuning, is able to outperform CALF in the full-shot mode. If you have any other questions, feel free to reach out for further discussion!
Summary: The paper proposes to utilize a pre-trained vision masked autoencoder for time series forecasting. The time series data is processed channel-independent and stacked depending on the periodicity of the series. A pre-trained vision-MAE is applied, and the result is transformed back in the series space representing the forecast. The authors argue that the intrinsic pattern of vision data is more similar to time series data than text, hence, pre-trained vision models might be beneficial for TSF foundation models while LLM are not. The approach was evaluated on an extensive set of benchmarks and shows promising results. ## update after rebuttal we thank the authors for the clarifications, I updated my score already Claims And Evidence: The main claim is that Vision-TS, a pre-trained vision autoencoder with time series specific pre- and postprocessing, is suitable for time series forecasting. The claim is supported by an extensive set of benchmark evaluations. While I think the claim is justified, I have some concerns about the evaluation results (see below). Methods And Evaluation Criteria: Yes, the paper evaluates on multiple standard benchmark datasets for time series forecasting. The method itself is a creative approach that might not constitute a model that will be actually used in practice but might help to understand why and how TSF foundation models work. Theoretical Claims: There are no theoretical proofs or claims. Experimental Designs Or Analyses: In general, the experimental design is well done as the paper evaluates multiple benchmarks (individual datasets of the three benchmarks overlap, which is fine). I only have one concern and one suggestion: 1) Gift-Eval is the most comprehensive benchmark of the utilized benchmarks. The results are unfortunately discussed only very briefly in Figure 7. Hence, only the MASE metrics, but no WQL or average rank metrics are reported. 
Although the leaderboard is even linked in the paper, some models outperforming VisionTS (ttm, chronos bolt) are not reported. Although some might be concurrent work, updating the results would be beneficial. 2) Following the argument from point (1), GIFT-Eval and the Monash repository are likely the much more relevant benchmarks as they are more comprehensive than the long-term benchmark. Recent work further highlights problems of the respective long-term benchmark [1,2]. Therefore, I would suggest emphasizing and discussing GIFT-Eval and Monash in more depth instead of mostly highlighting the long-term benchmarks.

[1] L. Brigato, R. Morand, K. Strømmen, M. Panagiotou, M. Schmidt, and S. Mougiakakou, ‘Position: There are no Champions in Long-Term Time Series Forecasting’, Feb. 19, 2025, arXiv: arXiv:2502.14045. doi: 10.48550/arXiv.2502.14045.
[2] Invited Talk by Christoph Bergmeir - Fundamental limitations of foundational forecasting models: The need for multimodality and rigorous evaluation, Time Series in the Age of Large Models Workshop NeurIPS 2024

Supplementary Material: I did not check the supplementary code. Relation To Broader Scientific Literature: The work is located in the field of pre-trained time series models / foundational time series models. Most closely related is the work that builds upon existing language models, as VisionTS also utilizes a pre-trained model that is actually pre-trained on non-time-series data. To the best of my knowledge, VisionTS is the first model that utilizes a pre-trained vision model. Essential References Not Discussed: Some methods that appear in the evaluation (e.g. TTM and Chronos) are not cited. Other Strengths And Weaknesses: Strength: - Creative approach that can help the understanding of what drives the performance of TSF foundation models. - Extensive benchmark scope Weakness: - Evaluation reporting might miss certain metrics and models (see Experimental Design & Analysis).
I suggest the author should update the mentioned results. I especially want to note that, regardless of whether VisionTS is still the best-performing model, this would improve the paper. The novelty of the idea (which helps to further understand TSF foundation models) with a thorough and sound evaluation is more important than outperforming SOTA. If the evaluation results are updated accordingly, I would consider increasing my score towards acceptance. - The approach seems not to scale with bigger models (see results on different MAE sizes) Other Comments Or Suggestions: - Questions For Authors: - As far as I understand, VisionTS relies on a specific periodicity that is used for segmentation. How would you use VisionTS to model time series data that exhibits multiple periodicities? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive comments on our work. We are pleased that you find our paper **novel and well-experimented, aiding in understanding the workings of TSF foundation models.** Here are our responses to your concerns: > E1: Gift-Eval is the most comprehensive benchmark of the utilized benchmarks. > E2: Recent work further highlights problems of the respective long-term benchmark. Therefore, I would suggest emphasizing and discussing the gift-eval and monash in more depth instead of mostly highlighting the long-term benchmarks. - We fully agree that LTSF has limitations. Since this ground-breaking viewpoint has recently emerged, before it is widely accepted by the research community, we believe it is still necessary to test widely-used benchmarks. As you can see, other reviewers are still interested in the LTSF datasets. - We also agree on the need to enhance presentation of solid benchmarks. **We will add the GIFT-Eval results to the right side of the teaser (Figure 1) to further emphasize them and expand the discussion of their experimental results.** Below is further discussion on the GIFT-Eval leaderboard which will be included in our final revision: > W1: Evaluation reporting might miss certain metrics and models (see Experimental Design & Analysis). I suggest the author should update the mentioned results. > For GIFT-EVAL, only the MASE metrics, but no WQL or average rank metrics are reported. - We report MASE but not CRPS (WQL) or average rank, since **these are probabilistic metrics, not point metrics** (note that the average rank is based on CRPS-sorted results). Due to limitations of the MAE model, VisionTS is not a probabilistic forecasting model, as mentioned in Section 6. Comparing the CRPS metric of point-forecasting models with probabilistic models is unfair, as the former's CRPS tends to be significantly worse. 
**If we only consider the point-forecasting models (e.g., TTM r1/r2 and Timer) on the leaderboard, VisionTS significantly outperforms them in both MASE and CRPS.**

> Some methods that appear in the evaluation (e.g. TTM and Chronos) are not cited.

> Although the leaderboard is even linked in the paper, some models outperforming VisionTS (ttm, chronos bolt) are not reported. Although some might be concurrent work, updating the results would be beneficial.

- Thank you for your reminder and suggestion. **We will cite all the referenced papers and update our results to include those concurrent works, even though some lack published papers, as noted by `Reviewer TiNp` and mentioned in Line 297**. We also highlight that TTM's "superior results over VisionTS" were achieved by fine-tuning on the GIFT-Eval training dataset, not as a zero-shot model. Its zero-shot capability is significantly weaker than VisionTS's, as shown on the leaderboard and already referenced in Figure 4 of our paper.
- We would also like to note that there may be data leakage issues for these concurrent works. For instance, both TimesFM 2.0 and Chronos-bolt used the M4 dataset, while TimesFM also utilized the Weather dataset for pretraining. In contrast, the visual MAE was trained on ImageNet, long before GIFT-Eval, which can ensure no leakage.

> W2: Approach seems to not to scale with bigger models (see results on different MAE size)

- One explanation is that larger vision models tend to memorize image-related details, leading to overfitting that is harmful to time series forecasting. Moirai (ICML 2024 oral) also exhibited similar behavior, where larger models perform worse on out-of-distribution data (see Table 6 in [1]), with **even more severe degradation than VisionTS**. This is understandable given the disparity between image and time series modalities. We believe future adaptations in the time series domain could alleviate this issue.
- Additionally, we found that larger models are not without merit.
For example, MAE (large) performs well on ETTh1, and MAE (huge) shows good results on Electricity. Exploring the scenarios for different MAE sizes is a promising research direction. > Q1: How would you use VisionTS to model time series data that exhibits multiple periodicity? - We would assess potential periodicities based on sampling frequency and select the optimal P using the validation set. For time series without clear periodicity or complex multi-periodicity, we can try P=1, which can be effective in our experiments (Appendix C.5). **We hope these responses can fully address your concerns. Thank you once more for your detailed feedback, which greatly enhances the robustness of this paper.** [1] Unified Training of Universal Time Series Forecasting Transformers --- Rebuttal Comment 1.1: Comment: Thanks for providing this rebuttal and addressing my main concern. I updated my score accordingly. I want to note that regarding the LTSF benchmark discussion, it is important to highlight issues when they are present - while I understand your concern that reviewers are still interested in LTSF benchmark, a change can only happen when top conference papers lead the way. So, while I understand that fully omitting it is problematic, its still in realm of the authors to highlight what they think is most important and meaningful. --- Reply to Comment 1.1.1: Comment: Thank you very much for your prompt response and your appreciation! We believe that the time series research community may need some time to adapt to this potential change. If this paper is fortunate enough to be accepted, we will do our best to improve this situation. Best, Authors
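The validation-based choice of P described in the Q1 response above can be sketched as follows; this is a toy illustration using a seasonal-naive proxy forecaster and hypothetical candidate periods, not the authors' exact procedure:

```python
import numpy as np

def seasonal_naive_error(series, period, val_len):
    """Validation MSE of a seasonal-naive forecast with the given period."""
    train_len = series.size - val_len
    val = series[-val_len:]
    # Forecast each validation step by the value one period earlier.
    preds = np.array([series[train_len + i - period] for i in range(val_len)])
    return float(np.mean((preds - val) ** 2))

def select_period(series, candidates=(1, 7, 24), val_len=48):
    # Pick the candidate period with the lowest validation error.
    errors = {p: seasonal_naive_error(series, p, val_len) for p in candidates}
    return min(errors, key=errors.get)

# A toy hourly series with a clear period of 24 plus mild noise.
t = np.arange(24 * 30)
rng = np.random.default_rng(0)
series = np.sin(2 * np.pi * t / 24) + 0.05 * rng.standard_normal(t.size)

best_p = select_period(series)
```

For series with multiple plausible periodicities, the same loop can simply rank all candidates implied by the sampling frequency and fall back to P=1 when no candidate clearly wins, mirroring the fallback mentioned in the response.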
Summary: In this paper the authors propose to adapt an image masked autoencoder pretrained on ImageNet for time series forecasting. They justify their choice by the similarities between the image and the time series modalities. They empirically show that the proposed method achieves superior performance compared with other state-of-the-art baselines. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A. Experimental Designs Or Analyses: No issue. Supplementary Material: No. Relation To Broader Scientific Literature: N/A. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Strength: - The paper presents thoughtful designs and evaluation of the transfer learning capability of visual models on time series forecasting. It's a relatively novel and intuitive approach. - The authors devise and discuss a few key designs that are proper and exclusive for this visual approach. The empirical results are impressive. Weakness: - The paper lacks ablation studies to help understand the contributions from each of the new mechanisms the authors introduce as compared to a time-series-native pretrained model. See questions. Other Comments Or Suggestions: Some empirical results are slightly outdated or misinterpreted: - Since GIFT-eval is cited, consider explaining why the current leaderboard leaders are not mentioned (e.g., no corresponding publications). - Table 3 is misleading, as the speed-up from VisionTS is due to the alignment and the reconstruction step, and it only shows up for >1k forecast length, possibly for batches that use a same P within. Moirai and TimesFM are not the best decoder-only reference either, e.g., TimesFM does not implement cached decoding properly. Questions For Authors: My main questions are around the additional practices introduced in the paper, e.g., alignment and reconstruction, reshaping into 2D with the explicit hints of the periodicity, etc.
These practices are relatively agnostic to the backbone foundation model, and should be inspected separately - if considered part of VisionTS, they undermine the zero-shot claim of the proposed method, as these practices are very lookback-window specific, and it's not surprising to see that proper handling of the time scale and the periodicity can deliver good models (e.g., [1]). I'd suggest the following ablation studies to help better understand the proposed method: 1. An empirical study when other methods are fed with the time series of some good alignment and the results are rescaled. 2. An empirical study when a universal or an improper P is fed to VisionTS. [1] SparseTSF: Modeling Long-term Time Series Forecasting with 1k Parameters, https://arxiv.org/abs/2405.00946 Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your invaluable review. We are delighted that you believe that our paper is novel and the experiment results are impressive. Below are our responses to the concerns:

> W1: ablation studies for each of the new mechanisms.

> Q1: An empirical study when other methods are fed with the time series of some good alignment and the results are rescaled.

- If "other methods" refers to *zero-shot pretrained models*, we kindly note that the alignment is applicable only to visual models, as it is meant to convert 1D time series into 2D formats. To the best of our knowledge, no existing TSF foundation models accept direct 2D input, making this alignment step inapplicable to existing models.
- If "other methods" refers to *models trained from scratch*, we conducted an ablation study using the same alignment but substituting the vision backbone with various models. **Table 20 (Appendix D.3) indicates these substitutions significantly hurt performance and fail to achieve zero-shot forecasting**, underscoring the vision backbone's crucial role in VisionTS. **The mentioned SparseTSF (or TimesNet using similar alignment) cannot achieve such zero-shot forecasting either.**
- Beyond the alignment mechanism, we also introduce a new mechanism: using smaller standard deviations ($r$) during normalization. Thanks for your suggestions; we would like to investigate its contribution. The following table summarizes Moirai's performance with different $r$ (average MSE across four ETT datasets), indicating that values of $r$ significantly higher or lower than 1 lead to notable degradation. The reason is that image and time series distributions differ; the former is limited by color range while the latter is not. Therefore, this mechanism is unnecessary for other TSF foundation models.

|$r$|0.4|0.6|0.8|1.0|1.2|1.5|
|-|-|-|-|-|-|-|
||0.474|0.494|0.387|0.372|0.370|0.380|

> Q2: An empirical study when a universal or an improper P is fed to VisionTS.
- Thank you for your suggestion. We have compared the results using P=1 on the ETT datasets in Table 7 (Appendix B.2). To further validate this, we tested P=1, 7, and 24 on the Monash dataset:

|Proper P|P=1|P=7|P=24|LLMTime|
|-|-|-|-|-|
|0.729|0.931|0.957|1.102|1.041|

Results indicate that selecting an appropriate P based on sampling frequency is crucial for zero-shot forecasting.

> Q: if considered part of the VisionTS, they undermine the zero-shot claim of the proposed method, as these practices are very lookback window-specific.

- We kindly note that **existing zero-shot foundation models also have their own mechanisms to incorporate sampling frequency based on the specific lookback window**. For example, Moirai selects an appropriate patch size based on the sampling frequency (see Appendix B.1 of [1]). TimesFM [2] even includes the sampling frequency as model input. We believe that leveraging prior data characteristics (e.g., sampling frequency and periodicity) to further enhance the performance of zero-shot models is also a promising research direction.

> Comment 1: consider explaining why the current GIFT-Eval leaders are not mentioned (e.g., no corresponding publications).

- Thank you for your suggestion. As you noted, current leaders are concurrent works, some without publications (mentioned in Line 297). **However, we plan to include them in the final paper version, as discussed with `Reviewer cUEq`.** We would also like to note that there may be data leakage issues for these concurrent works. For instance, both TimesFM2 and Chronos-bolt used the M4 dataset, while TimesFM also utilized the Weather for pretraining. In contrast, visual MAE was trained on ImageNet and ensures no leakage.

> Comment 2: Table 3 is misleading, as the speed up from VisionTS is due to the alignment and the reconstruction step, and it only shows up for >1k forecast length, possibly for batches that use a same P.
Moirai and TimesFM are not the best decoder only reference either, e.g., TimesFM does not implement cached decoding properly.

- Thank you for your suggestion. For efficiency testing, our experimental settings are consistent with Moirai (refer to [1] Table 23), using longer forecast lengths. We choose Moirai and TimesFM for comparison since they are our baselines, and we will note in our final revision that VisionTS's evaluation uses a same P. To further address your concerns, we test shorter lengths, as shown in the table below.

|Context Len|100|100|100|100|200|300|400|
|-|-|-|-|-|-|-|-|
|Pred Len|100|200|300|400|100|100|100|
|Moirai (base)|0.03|0.03|0.04|0.04|0.04|0.04|0.04|
|TimesFM|0.02|0.03|0.04|0.06|0.02|0.02|0.02|
|VisionTS|0.04|0.03|0.03|0.03|0.04|0.05|0.05|

The table shows that VisionTS's runtime is similar to these two models. **We hope these responses can fully address your concerns. Thank you again for your detailed feedback!**

[1] Unified Training of Universal Time Series Forecasting Transformers
[2] A decoder-only foundation model for time-series forecasting
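The $r$-scaled instance normalization ablated in this rebuttal (keeping series values within the bounded range an image model expects) can be sketched as follows; the function names, the default $r$, and the clipping threshold are our illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def normalize_for_image(x, r=0.4, clip=2.0):
    """Instance-normalize a series to zero mean and standard deviation r,
    then clip, so values stay within a bounded color-like range."""
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / (sigma + 1e-8) * r
    return np.clip(z, -clip, clip), (mu, sigma)

def denormalize(z, stats, r=0.4):
    # Invert the scaling to map model outputs back to the series space.
    mu, sigma = stats
    return z / r * (sigma + 1e-8) + mu

rng = np.random.default_rng(1)
x = 10.0 + 3.0 * rng.standard_normal(512)   # toy series, arbitrary scale
z, stats = normalize_for_image(x)
x_rec = denormalize(z, stats)
```

Using a small $r$ concentrates the values well inside the clip range, which matches the rebuttal's point that image-pretrained backbones expect a bounded input distribution while raw time series are unbounded.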
Summary: In this paper, the authors explore a novel direction in applying foundation models to time series forecasting. Given the intrinsic similarities between natural images and time series, such as modality, origin, information density, and features, the authors introduce VisionTS, a TS forecasting model built upon the pretrained CV foundation model MAE. By leveraging segmentation, rendering, and alignment techniques, 1D time series data is transformed into 2D matrices, enabling the reconstruction and prediction of masked horizon sequences. Claims And Evidence: Yes. Extensive zero-shot and full-shot experiments have been conducted on the long-term TSF benchmark, the GIFT-Eval Leaderboard, and the Monash TSF Benchmark. Additionally, efficiency evaluations have been included. The results highlight the superior performance of the approach in cross-modality forecasting research. Methods And Evaluation Criteria: Yes. This method maintains the same experimental settings as the previous methods in time series forecasting. Theoretical Claims: This paper does not contain any theoretical discussions or claims. Experimental Designs Or Analyses: The experimental designs are reasonable and comprehensive. Supplementary Material: This paper has uploaded the code as supplementary material. Relation To Broader Scientific Literature: This paper focuses on foundation models for time series forecasting, which is basic research that could be widely used in scientific areas such as energy, sales, and finance. Since it brings a new technique, I do not find a specific relation between the proposed method and scientific literature in other research areas. Essential References Not Discussed: I think this paper has included all essential references in this research area. Other Strengths And Weaknesses: Strength: 1.
The authors investigate foundation models for time series forecasting from a novel view, and provide well-founded motivations for leveraging a pretrained visual model as a numeric series forecaster. 2. The authors conduct extensive experimental evaluations under both zero-shot and full-shot settings, and achieve promising performance. 3. This paper is well-written, providing sufficient analysis and key insights of the methodology and experiments. Weakness: 1. In time series forecasting tasks, it is necessary and important to monitor the temporal order of time points or patches, such as PatchTST. How does VisionTS obtain the complete temporal information of visible patches during the alignment process? 2. In the zero-shot experiments in Tables 1 and 9, VisionTS underperforms compared to Moirai on half of the datasets (ETTm2, Electricity, and Weather). The authors should provide a detailed discussion and analysis of the underlying reasons. 3. Zero-shot comparison on the long-term TSF benchmark shown in Table 1 is suggested to include TTM [1] and Time-MoE [2] as baselines as well.

[1] Ekambaram, Vijay, et al. Tiny time mixers (TTMs): Fast pre-trained models for enhanced zero/few-shot forecasting of multivariate time series. NeurIPS, 2024.
[2] Shi, Xiaoming, et al. Time-MoE: Billion-scale time series foundation models with mixture of experts. ICLR, 2025.

Other Comments Or Suggestions: Please refer to the weakness. Questions For Authors: Please refer to the weakness. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your invaluable response. We are delighted that you found our paper novel, well-motivated, and with sufficient experiments and insights. Below are our responses to the concerns: > W1: How does VisionTS obtain the complete temporal information of visible patches during the alignment process? - During the alignment process, the temporal information is encoded one-to-one to the *spatial information* after transformation. Specifically, the patch at the $i$-th row and $j$-th column corresponds to the time step $j\times N+i$ (with the input window scaled to $N(N-n)$ time steps in total). During the ViT processing, each patch receives a unique 2D positional encoding, enabling the model to capture spatial information and, consequently, the corresponding temporal information. > W2: In the zero-shot experiments in Tables 1 and 9, VisionTS underperforms compared to Moirai on half of the datasets (ETTm2, Electricity, and Weather). The authors should provide a detailed discussion and analysis of the underlying reasons. - A possible explanation is that **Moirai's pre-training data significantly overlaps with the domains similar to Electricity and Weather (e.g., Moirai pretraining containing 60% energy and 15% climate data)**. This possibly leads to data leakage. In contrast, VisionTS, using an MAE model trained solely on ImageNet, is free from such potential leakage. - Additionally, this experiment is *not* to prove VisionTS is superior to Moirai; instead, we aim to show a purely visual model can achieve performance comparable to a native time series pretrained model. Maybe we can rephrase this statement as: *In the zero-shot experiments, Moirai (trained on time series) underperforms compared to VisionTS (trained on images) on half of the time series datasets*. This underscores the promising potential of vision models in TSF. > W3: Table 1 is suggested to include TTM [1] and Time-MoE [2] as baselines as well. Thank you for your suggestion. 
We report the base and large model results for Time-MoE since the ultra model weights are unreleased. For TTM, we used the official HuggingFace model for replication. The following table summarizes the performance of various zero-shot foundation models. | | VisionTS (base) | Time-MoE (base) | Time-MoE (large) | TTM (v1) | Moirai (base) | | ---------------- | --------------- | --------------- | ---------------- | --------- | ------------- | | ETTh1, MSE | **0.390** | 0.400 | 0.394 | 0.398 | 0.434 | | ETTh1, MAE | **0.414** | 0.424 | 0.419 | 0.421 | 0.439 | | ETTh2, MSE | **0.333** | 0.366 | 0.405 | 0.348 | 0.346 | | ETTh2, MAE | **0.375** | 0.404 | 0.415 | 0.393 | 0.382 | | ETTm1, MSE | **0.374** | 0.394 | 0.376 | 0.520 | 0.382 | | ETTm1, MAE | **0.372** | 0.415 | 0.405 | 0.479 | 0.388 | | ETTm2, MSE | 0.282 | 0.317 | 0.316 | 0.312 | **0.272** | | ETTm2, MAE | **0.321** | 0.365 | 0.361 | 0.348 | **0.321** | | Electricity, MSE | 0.207 | (data leakage) | (data leakage) | 0.201 | **0.188** | | Electricity, MAE | 0.294 | (data leakage) | (data leakage) | 0.293 | **0.274** | | Weather, MSE | 0.269 | 0.265 | 0.270 | **0.234** | 0.238 | | Weather, MAE | 0.292 | 0.297 | 0.300 | 0.266 | **0.261** | | avg, MSE | **0.309** | - | - | 0.335 | 0.310 | | avg, MAE | 0.345 | - | - | 0.367 | **0.344** | | 1st count | 7 | 0 | 0 | 1 | 5 | **We hope these responses can fully address your concerns. Thank you once more for your detailed feedback!**
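The spatial-temporal alignment described in the W1 answer above can be sketched in a few lines. This is a toy illustration, not VisionTS's actual code; the helper names and the grid size `N` are ours:

```python
import numpy as np

# Toy sketch of the alignment from the W1 answer (helper names are ours):
# a length-N*N input window is folded column-major into an N x N patch grid,
# so the patch at row i, column j covers time step j*N + i. Each patch's 2D
# positional encoding therefore pins down its temporal position.
def series_to_patch_grid(series, N):
    """Fold a length-N*N series into an N x N grid, filling column by column."""
    series = np.asarray(series)
    assert series.size == N * N
    return series.reshape(N, N, order="F")  # Fortran order = column-major

def patch_time_step(i, j, N):
    """Time step of the patch at row i, column j (the j*N + i rule)."""
    return j * N + i

N = 4
grid = series_to_patch_grid(np.arange(N * N), N)
for i in range(N):
    for j in range(N):
        assert grid[i, j] == patch_time_step(i, j, N)
```

Because the fold is a bijection between grid positions and time steps, positional encodings over the grid carry the complete temporal ordering.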
TabPFN Unleashed: A Scalable and Effective Solution to Tabular Classification Problems
Accept (poster)
Summary: This paper introduces BETA, a method to improve the scalability and performance of TabPFN (a transformer-based technique for tabular classification). BETA combines (1) a lightweight encoder, fine‐tuned (with the pre-trained TabPFN frozen) to re-map raw features into a latent space and better align with downstream data distributions (to reduce bias), (2) multiple encoder paths using a Batch Ensemble strategy to introduce diversity and reduce variance, (3) bootstrapped sampling during inference to generate diverse support and prediction sets, which are aggregated (uniformly or with weights), (4) Error-Correcting Output Codes to extend the model to multi-class classification tasks (beyond 10 classes). Extensive experiments on benchmark datasets (e.g., TALENT, high-dimensional datasets) and ablation studies, as well as bias-variance analyses, support the claim that BETA improves accuracy and efficiency relative to prior TabPFN variants and other state-of-the-art tabular models. Claims And Evidence: The claims are supported by comprehensive experiments over 200+ datasets, detailed ablation studies, and analyses of bias-variance decomposition. The experimental results, significance tests (e.g., Wilcoxon-Holm tests), and efficiency metrics (inference time, parameter count) provide empirical support. While the empirical evidence is good, some claims regarding theoretical guarantees for bias and variance reduction rely on standard decompositions (i.e., the bias-variance decomposition, Eq. (13), is well established in the literature). Additional theoretical justification or analysis could further strengthen the bias and variance reduction claims. Methods And Evaluation Criteria: The introduced methods are sound. The use of diverse benchmark datasets (e.g., the TALENT benchmark and high-dimensional datasets) along with multi-seed experiments and statistical significance tests is appropriate for evaluating the proposed methods. 
A more in-depth sensitivity analysis regarding hyperparameter settings (e.g., number of bootstrapped samples or encoder paths) would enhance the evaluation. Theoretical Claims: The theoretical aspects used in this paper are well-established in the literature. Experimental Designs Or Analyses: The experimental setup is robust. The authors use multiple random splits and seeds and compare against a range of competitive baselines (classical and deep learning models). The bias-variance analysis is clearly presented; efficiency metrics (inference time, checkpoint size) are well reported. Some important experimental details (e.g., hyperparameter ranges and exact training protocols) are in the supplementary material. Including a brief summary in the main text would improve the clarity. Supplementary Material: The supplementary materials support the claims in the main paper. Relation To Broader Scientific Literature: The paper builds directly on prior work in tabular deep learning, notably TabPFN, TuneTables, and MixturePFN, while incorporating ensemble methods and parameter-efficient fine-tuning techniques known from the broader deep learning literature. There is a strong focus on TabPFN techniques; the combination of ensemble methods with advanced embedding techniques (e.g., https://arxiv.org/abs/2411.01645) could be further analyzed and discussed. Essential References Not Discussed: There are several recent papers exploring how the rich contextual and semantic representations generated by large language models can be integrated into prediction models for tabular data. Mentioning this stream of work in the introduction would inform and help readers understand the broader context. Other Strengths And Weaknesses: Strengths: 1. The approach builds on well-established techniques. 2. The experimental evaluation is extensive and covers a wide range of datasets and settings. 3. The integration of ECOC to overcome multi-class limitations is well-motivated. Weaknesses: 1. 
The theoretical foundation of the method could be stronger. 2. Some critical implementation details and hyperparameter sensitivity analyses should be summarized in the main part. 3. The limitations section is brief, and further discussion on potential challenges (e.g., distribution shifts, application to regression) would be welcome. 4. More recent work on improving predictions on tabular data through contextual embeddings could be included and discussed. Other Comments Or Suggestions: 1. A brief discussion on the computational trade-offs of adding more encoder paths versus performance gains. 2. See weaknesses above. ## After Rebuttal The authors do address significant and well-known shortcomings of TabPFN and demonstrate consistent improvements on a wide benchmark. However, I do strongly agree with Reviewer 3Q5h that the code should be public. I strongly assume the link to the open-source code will be made available upon publication. Otherwise, there would be a serious replicability limitation. In good faith that the authors will open-source their code and adjust the paper according to the reviews, I would maintain my current score. Questions For Authors: 1. Can you elaborate on how the combination of multiple encoder paths specifically contributes to variance reduction? Are there any theoretical bounds or insights that could be provided? 2. How sensitive is the performance of BETA to the choice of the number of bootstrapped samples and encoder paths? Would you consider adding a sensitivity analysis section? 3. You briefly mention the limitation regarding regression tasks. What modifications would be needed to extend BETA to regression problems, and do you have preliminary results in that direction? 4. Have you tested BETA under scenarios with distribution shifts (covariate or concept drift)? If not, could you discuss potential approaches to handle such scenarios? Code Of Conduct: Affirmed. Overall Recommendation: 4
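On Question 1 above: the textbook intuition is that averaging $M$ roughly independent predictors shrinks variance by about $1/M$ while leaving bias untouched. A quick Monte Carlo sketch of that idealized bound (illustrative only; BETA's encoder paths share a frozen backbone and are not fully independent, so the real reduction is smaller):

```python
import numpy as np

# Idealized sketch: averaging M independent predictors divides variance by M
# and leaves bias unchanged. All numbers here are toy values, not the paper's.
rng = np.random.default_rng(0)
truth, bias, sigma, M = 1.0, 0.1, 0.5, 16
draws = truth + bias + rng.normal(0.0, sigma, size=(200_000, M))

single = draws[:, 0]            # one predictor
ensemble = draws.mean(axis=1)   # average of M predictors

assert abs(single.mean() - ensemble.mean()) < 0.01   # bias unchanged
assert ensemble.var() < single.var() / (M / 2)       # variance ~ sigma^2 / M
```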
Rebuttal 1: Rebuttal: Thanks for the valuable feedback. In this rebuttal, we address the reviewers' suggestions and concerns in a Q&A format: --- **Q1: Sensitivity Analysis** --- **A1**: We sincerely appreciate the reviewer’s valuable suggestion. To address the sensitivity of BETA’s performance to the **number of bootstrapped samples** and **encoder paths**, we have conducted additional experiments. - In **Appendix D.3**, we provide an analysis of the **number of encoders** and other key components in BETA. - Additionally, we include performance rankings for BETA across various datasets from the **Tiny Benchmark2** classification dataset [1], where we vary the **number of bootstrapped samples**. A table summarizing the results is presented below: | size | 500 | 1000 | 2000 | 3000 | tabpfn | | ------------ | ---- | ---- | ---- | -------- | ------ | | average rank | 3.39 | 2.87 | 2.57 | **1.91** | 4.24 | - We also summarize the performance of Beta with different **number of encoders** and their corresponding **average training time** across the datasets of **Tiny Benchmark2**. Specifically, we show the **average percentage change in performance relative to the original TabPFN**. | number of encoders | 1 | 2 | 4 | 8 | 16 | | -------------------- | ---- | ---- | ----- | ----- | ----- | | relative improvement | 1.7% | 2.6% | 3.19% | 3.23% | 3.76% | | finetuning time (s) | 31 | 48 | 73 | 126 | 203 | The result demonstrates that both **the number of bootstrapped samples** and **the number of encoders** lead to **better performance**, but they also **increase computational costs**. We hope these additional details address the reviewer’s concerns. Additionally, we will provide a brief summary of the **experimental details and sensitivity analysis** in the main text of the final version of the paper. --- **Q2: Recent Work on Contextual Embeddings** --- **A2**: We sincerely thank the reviewer for the suggestion. 
We have noted that the paper referenced by the reviewer, [2], provides a detailed analysis of the **combination of ensemble methods with advanced embedding techniques**. We will incorporate a discussion of this work and related research in the final version of the paper. Additionally, we plan to explore how these techniques could be integrated with BETA in our future work. We appreciate the reviewer’s insight and will ensure this topic is adequately addressed in the final version of the manuscript. --- **Q3: Extending BETA to Regression Problems** --- **A3**: Thank you for your insightful comment. We applied BETA to regression datasets by using a pre-trained regression version of PFN, where the final classification head has an output dimension of 1, while keeping the other modules unchanged. The only modification was replacing the loss function with MSE loss and fine-tuning it on downstream regression datasets. We evaluated the regression version of BETA on 100 regression datasets from **[1]** and compared the **average rank** to other tuned methods (we only display a subset of the methods for clarity) presented in **[1]**. Our results show that the regression version performs well, although its performance is slightly lower compared to its classification counterpart. |Method|catboost|tabr|Beta|FT-T|XGBoost|MLP|KNN| |-|-|-|-|-|-|-|-| |avg rank|7.3|9.2|9.3|9.8|10.0|12.7|17.8| We will clarify these findings and potential directions for future work in the final version of the paper. Thank you again for your valuable suggestion! --- **Q4: Response to Testing BETA under Distribution Shifts** --- **A4**: Thank you for your thoughtful question. We have tested BETA on three classification datasets from **TabRed** [3] and integrated the **split method** (Beta-split) and **temporal embedding techniques** (Beta-temporal) discussed in [4] into BETA. 
The table below presents the **AUC results** of **BETA** on these datasets, showing that the plugin techniques are effective for improving BETA’s performance. Additionally, these results highlight the potential of **ICL methods** for addressing **temporal shift** scenarios. |dataset|Beta|Beta-split|Beta-temporal|MLP| |-|-| -|-|-| | ecom-offers|0.5692|0.6264| **0.6348**|0.5866| | homesite-insurance|0.9600| **0.9602** |0.9521| 0.9404 | | homecredit-default|0.6114| **0.6710** |0.6637| 0.4730 | We will include these findings and further clarify this in the final version of the paper. Thank you again for the valuable suggestion! [1] A Closer Look at Deep Learning Methods on Tabular Datasets. 2024 [2] Enriching Tabular Data with Contextual LLM Embeddings: A Comprehensive Ablation Study for Ensemble Classifiers. 2024 [3] Tabred: A benchmark of tabular machine learning in-the-wild. 2025 [4] Understanding the Limits of Deep Tabular Methods with Temporal Shift. 2025 --- Rebuttal Comment 1.1: Comment: I appreciate the authors’ careful consideration of my concerns and questions. I will increase my score to “accept”. --- Reply to Comment 1.1.1: Comment: Thank you very much for your recognition and support of our work. We are grateful that you found our paper *well-motivated*, with *sound* methods and claims that are *supported by comprehensive experiments*. We sincerely appreciate your thoughtful suggestions, which are very valuable for improving the quality and completeness of our work. Thank you again for taking the time to review our paper.
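As background for the ECOC component summarized in the review (component (4), used to push past TabPFN's 10-class cap), here is a minimal, self-contained decoding sketch. The repetition-code construction is ours for illustration and is not claimed to match BETA's actual codebook:

```python
import numpy as np

# Minimal ECOC illustration (construction ours, not BETA's exact codebook):
# each of C classes gets a B-bit codeword; B binary predictors each emit one
# bit, and a sample is assigned to the class whose codeword is nearest in
# Hamming distance. Repeating a 4-bit base code 3 times gives a minimum
# distance of 3, so any single wrong binary predictor is corrected.
def codeword(k, n_bits=4, repeats=3):
    base = [(k >> b) & 1 for b in range(n_bits)]  # binary rep, LSB first
    return np.array(base * repeats)

C = 12  # more classes than TabPFN's 10-class limit
codebook = np.stack([codeword(k) for k in range(C)])

def ecoc_decode(bits):
    """Class whose codeword has minimal Hamming distance to `bits`."""
    return int(np.abs(codebook - bits).sum(axis=1).argmin())

bits = codeword(7).copy()
bits[5] ^= 1                      # one binary predictor errs
assert ecoc_decode(bits) == 7     # still decoded correctly
```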
Summary: The authors introduce BETA, a TabPFN variant featuring multiple improvements to the original TabPFNv1 model. BETA introduces encoder-based fine-tuning, multiple encoder fine-tuning, batch-ensemble encoding for inference optimization, inference-time bootstrapped sampling, and an error-correcting output code strategy to extend model support beyond 10 target classes. Across over 200 datasets and 15 seeds, BETA outperforms a suite of 28 baseline methods with 0.05 CD significance. BETA also performs well on high-dimensional datasets and maintains TabPFNv1-level inference throughput and a small model artifact of under 1 MB. The authors additionally show ablation studies covering variants of TabPFN in Figure 2 with discussion of bias-variance tradeoff impact. Claims And Evidence: The authors put substantial effort into providing convincing evidence for the claims they make in the paper. Methods And Evaluation Criteria: Yes. The evaluation criteria, proposed methods, and comparison baselines all make sense for the target application. Theoretical Claims: While I have not dived deep into the specifics of the theoretical claims, the approaches described make sense and appear reasonable. Experimental Designs Or Analyses: The experimental design appears solid and sufficiently large-scale to warrant statistically significant take-aways. I do however have some major reservations based on the specifics of how the experiments were conducted to ensure a fair comparison for all methods, which I detail below and look forward to the authors providing clarifications on: ### "Each dataset is randomly split into training, validation, and test sets with proportions of 64%, 16%, and 20%, respectively." & "To ensure a fair comparison, all TabPFN variants, including BETA, are evaluated using their default hyper-parameters without additional tuning" - What is the validation set used for?
TabPFN generally doesn't need to early stop or hyperparameter tune, so are we just throwing away useful data as unused validation data instead of incorporating it into the train_data to fit a better model? - Is the validation dataset combined with the training data for the final model fit after HPO? ### We report accuracy as the evaluation metric, where higher accuracy indicates better performance. - Why accuracy? Generally ROC AUC or logloss are more informative. Accuracy can be highly sensitive to the decision threshold, which is not something every model includes as a tunable parameter. Accuracy also doesn't do well in measuring how well calibrated the model is. - Were the models early stopped to maximize accuracy? For example, LightGBM, XGBoost, RealMLP, etc. can all be early stopped using validation data to optimize a target metric. This would greatly improve their results. Were the fine-tuned TabPFN methods (including BETA) early stopped to optimize accuracy? If so, were these methods ever refit on all of the data afterwards using the found optimal epoch/iteration? ### SOTA Claim & Bagging - BETA essentially implements internal bagging. Bagging is well known to be virtually universally helpful in improving the strength of models. This also applies to models such as RealMLP, LightGBM, CatBoost, etc. Have the authors considered also fitting bagged versions of the baselines to more faithfully compare top-end performance potential of the methods? I would suspect BETA to at best only marginally improve with explicit bagging (since it already incorporates internal bagging), whereas methods such as RealMLP, CatBoost, etc. will likely benefit greatly from it. This may somewhat change the take-aways in the paper. Note that even if BETA is no longer #1, that does not become a weakness of the paper. ### SOTA Claim & HPO - BETA and the TabPFN methods are not hyperparameter tuned, however the other methods are. 
Tuning via a single train/val split is suboptimal and can lead to major overfitting. Tuning via cross-validation is much more preferable and would lead to better results, especially on smaller datasets. Have the authors considered doing this to compare with stronger baselines? ### General comment on experimental design I suggest the authors read a recent paper detailing shortcomings of many recent tabular benchmark studies, including those that this paper builds its experimental design off of: https://arxiv.org/pdf/2503.09159 . The paper shares similar concerns that I have mentioned above, such as the usage of a holdout set instead of cross-validation. Supplementary Material: I have reviewed the appendix. Relation To Broader Scientific Literature: The contributions are highly related to the TabPFN and tabular foundation model literature, many of which are recent concurrent works (TabPFNv2, Attic, TabICL, etc.). Essential References Not Discussed: The authors do a good job of highlighting relevant works. Other Strengths And Weaknesses: I like the usage of Batch Ensemble for compute efficiency. In general the authors apply many clever tricks which are very sensible. It would have been nice to have an ablation study of iteratively adding the BETA improvements to TabPFN, showing the impact of each component that make up BETA. This would be a very strong addition and would help justify each component (whereas currently it is unknown if certain components are largely superficial / non-impactful). Other Comments Or Suggestions: I agree with the authors that TabPFNv2 is concurrent work, however I think it would elevate the paper if it is possible to include a version of Figure 4 with TabPFNv2 results in the appendix. 
Methods that are missing from Figure 4 that could be useful to include: - TabPFNv2 - TabForest - TabForestPFN - TabPFNMix - Attic - TabICL - An AutoML system, as done in the TabPFNv2 paper - Bagged baseline models Questions For Authors: Refer to the other sections. My score is contingent on the notable questions and concerns being addressed. The paper is generally of very high quality, but with some major potential pitfalls that might lead to incorrect conclusions. Code Of Conduct: Affirmed. Overall Recommendation: 4
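The bagged-baseline setup this review asks for could be sketched as follows. The subset rule (16 models, subsets of size max(0.8 × train_size, 1000)) is taken from the authors' rebuttal below; the function name is ours and purely illustrative:

```python
import numpy as np

# Sketch of a bagged baseline (names ours): train n_models copies on random
# subsets of size max(0.8 * train_size, 1000), then average their predictions,
# mirroring the protocol described in the rebuttal's A3.
def bagged_subset_indices(train_size, n_models=16, seed=0):
    rng = np.random.default_rng(seed)
    size = min(train_size, max(int(0.8 * train_size), 1000))
    return [rng.choice(train_size, size=size, replace=False)
            for _ in range(n_models)]

subsets = bagged_subset_indices(train_size=5000)
assert len(subsets) == 16
assert all(len(s) == 4000 for s in subsets)          # 0.8 * 5000
assert all(len(set(s)) == len(s) for s in subsets)   # without replacement
```

Averaging each base model's predicted probabilities over such subsets gives a bagged variant of CatBoost, LightGBM, or RealMLP without changing the underlying learner.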
Rebuttal 1: Rebuttal: Thanks for the valuable feedback. In this rebuttal, we address the reviewers' suggestions and concerns in a Q&A format: --- **Q1: What is the validation set used for? Were the models early stopped to maximize accuracy?** --- **A1**: We follow [1, 2, 5, 6], where the validation set is used only for hyperparameter tuning and early stopping, not for training. For example, for baselines like LightGBM, XGBoost, and FT-Transformer, we use the validation set for early stopping. For BETA and TabPFN variants, we use it for early stopping but do not refit on the combined training and validation set. --- **Q2: Why accuracy?** --- **A2**: We followed the evaluation metrics used in [1, 2, 5, 6]. We also computed AUC-based average ranks (see below). RealMLP's lower AUC ranking may be linked to label smoothing, aligning with findings in [3]. |Method|avg rank| |-|-| |Beta|5.1| |catboost|7.3| |ModernNCA|7.64| |LightGBM|8.92| |RealMLP|10.04| |TabPFN|10.38| |FT-T|10.92| |MLP|12.19| --- **Q3: Bagging** --- **A3**: We agree that other baselines may benefit from bagging. However, since TabPFN can **change context without retraining**, we only applied inference-stage bagging for BETA. As a result, for other baselines, bagging would **significantly increase the cost**—scaling nearly linearly with the number of base models. To evaluate bagging’s impact on other models, we trained 16 classifiers per dataset with different random seeds on subsets of size max(0.8 × train_size, 1000) for CatBoost, LightGBM, and RealMLP, ensuring that each subset was at least as large as the ones used by BETA during inference. |Method|avg rank| |-|-| |Beta|6.1| |realmlp-bagging|6.97| |catboost-bagging|7.42| |catboost|8.06| |realmlp|8.91| |lightgbm-bagging|9.37| |lightgbm|10.01| We appreciate your suggestion and will further enhance the discussion on Bagging in the final version. --- **Q4: HPO** --- **A4**: We followed the hyperparameter tuning approach of [1, 2, 5, 6]. 
We carefully read the article you referenced [4] and implemented the 5-fold CV evaluation protocol from [4] (Section 2). However, we found this protocol to be significantly **more computationally intensive**. Due to computational constraints, this approach was applied only to XGBoost, RealMLP, and Beta on classification datasets with fewer than 3,000 rows (from [1]). For XGBoost and RealMLP, we performed HPO using 5-fold CV, then averaged the predictions from the 5 trained models. For BETA, we maintained its default hyperparameters but trained five models on different training folds and averaged their predictions. The results, shown in the table below, indicate that cross-validation does improve model performance, even for methods like BETA that do not require HPO, though the gain for BETA is relatively modest. We sincerely appreciate your valuable feedback on HPO strategies, and in the final version of the paper, we will conduct a more detailed evaluation of these effects. |Method|Avg AUC| |-|-| |XGBoost|0.8452| |XGBoost-cv|*0.8470*| |RealMLP|0.8610| |RealMLP-cv|*0.8739*| |Beta|0.8759| |Beta-cv|*0.8785*| --- **Q5: I agree with the authors that TabPFNv2 is concurrent work, however I think it would elevate the paper if it is possible to include a version of Figure 4 with TabPFNv2 results in the appendix.** --- **A5**: In the final version of the paper, we plan to include additional baselines. Due to time constraints, we evaluated TabPFN-v2 and TabICL, both using default settings. To ensure a fairer comparison, we also tested subsampled variants (TabPFN-sub, TabICL-sub) with 16 training subsets and different preprocessing strategies. Results show that full versions of TabPFN-v2 and TabICL approach BETA’s performance, while **subsampled versions perform significantly worse**. Our method is currently limited by memory constraints, but a well-optimized TabPFN-v1 could allow larger context sizes, potentially further improving results. 
|Method|avg rank| |-|-| |Beta|8.65| |PFN-v2|8.76| |TabICL|9.16| |PFN-v2-sub|12.56| |TabICL-sub|12.76| --- **Q6: Sensitivity Analysis** --- **A6**: Please refer to A1 for reviewer 2vtp. [1] Revisiting deep learning models for tabular data. 2021 [2] A Closer Look at Deep Learning Methods on Tabular Datasets. 2024 [3] Better by default: Strong pre-tuned mlps and boosted trees on tabular data. 2024 [4] Unreflected Use of Tabular Data Repositories Can Undermine Research Quality. 2025 [5] On Embeddings for Numerical Features in Tabular Deep Learning. 2022 [6] TabR: Tabular Deep Learning Meets Nearest Neighbors in 2023. 2023 --- Rebuttal Comment 1.1: Comment: Exceptional rebuttal. The authors have addressed nearly all of my concerns and have gone to great lengths to improve their paper by incorporating bagging, cross-validation HPO, TabPFN-v2 and TabICL into the paper. Due to the major improvements, I have increased my score to a 4, and I eagerly await the Beta model release to try it out myself! --- Reply to Comment 1.1.1: Comment: We sincerely thank you for your recognition, encouragement, and kind words. We are especially grateful that you decided to increase your score for our paper. We are truly grateful that you highlighted the strengths of our work, noting that “the approaches described make sense”, “the experimental design appears solid and sufficiently large-scale to warrant statistically significant take-aways”, “many clever tricks which are very sensible”, and that “the paper is generally of very high quality.” Your thoughtful feedback and suggestions are extremely valuable for further improving the quality of our work. 
From your comments and the depth of your insights, it is clear that you are an expert in deep learning for tabular data. After acceptance, we will take time to organize our code for better readability and usability, and release it to support the development of tabular foundation models and the broader community. Thank you again for your time, efforts, and generous support.
Summary: This paper narrows its study to an adaptation method for TabPFN, which incorporates a fine-tuning encoder and bootstrapped sampling at the inference stage into the whole pipeline. BETA is able to mitigate bias and variance and achieves comparable performance. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: A narrow contribution to a specific model Essential References Not Discussed: Some foundation tabular models are missing, such as [1] and [2]. [1] Kim, M.J., Grinsztajn, L. & Varoquaux, G. (2024). CARTE: Pretraining and Transfer for Tabular Learning. Proceedings of the 41st International Conference on Machine Learning. [2] Yang, Yazheng, et al. "UniTabE: A Universal Pretraining Protocol for Tabular Foundation Model in Data Science." International Conference on Learning Representations (2023). Other Strengths And Weaknesses: Pros: 1. This work is a new variant of TabPFN, which aims to address the bias and variance problems in the original TabPFN model. The problem is well motivated by several explorative experiments. 2. The method incorporates previous techniques to mitigate both bias and variance issues, which is easy to understand. 3. Experiments demonstrate the effectiveness of the proposed method. Cons: 1. The proposed BETA uses an encoder to mitigate bias and bootstrapped sampling to reduce variance, both of which are established techniques. Merely incorporating both techniques into one pipeline makes this work limited in novelty. 2. Table 1 gives the comparisons between BETA and other variants of TabPFN. However, there is no evidence to show whether the claims are justified. 3. The experimental design is not so convincing. For example, when scaling to large datasets, BETA performs worse than LocalPFN on the large-scale datasets. 
But the authors claim that BETA demonstrates superior performance on many other datasets, even though this experiment is meant to verify that BETA performs better on large-scale datasets. As for showing that BETA is able to reduce bias and variance, Figure 2 introduces some settings based on TabPFN rather than the variants of Table 1, which makes the conclusion less convincing. Other Comments Or Suggestions: 1. The experiments should be carefully designed according to the claims of the proposed method. 2. More recent studies on foundation models or transfer models should be discussed. Questions For Authors: Why are the encoders in the fine-tuning stage placed before TabPFN instead of after it? Code Of Conduct: Affirmed. Overall Recommendation: 1
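Since the bias/variance claims are central to this review's concerns, it is worth noting that the decomposition they rest on is an exact identity, easy to verify numerically. The numbers below are toy values, not the paper's measurements:

```python
import numpy as np

# Exact identity behind the paper's bias-variance analysis, on toy numbers:
# E[(f_hat - y)^2] = (E[f_hat] - y)^2 + Var[f_hat], i.e. MSE = bias^2 + variance
# (plus irreducible noise when y itself is random; omitted here).
rng = np.random.default_rng(1)
target = 2.0
preds = 2.3 + rng.normal(0.0, 0.4, size=100_000)  # a biased, noisy estimator

mse = np.mean((preds - target) ** 2)
bias_sq = (preds.mean() - target) ** 2
variance = preds.var()
assert abs(mse - (bias_sq + variance)) < 1e-8
```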
Rebuttal 1: Rebuttal: Thanks for the valuable feedback. In this rebuttal, we address the reviewers' suggestions and concerns in a Q&A format: --- **Q1: Limitation of Novelty** --- **A1**: BETA builds on existing techniques, with its novelty lying in breaking key limitations of TabPFN while maintaining inference efficiency. Simplicity and effectiveness are highly valuable, especially for tabular data: 1. Overcoming TabPFN’s Limitations: - (a) **Handling Arbitrary Feature Dimensions**: We introduce multiple lightweight encoders, incorporating *batch ensemble* techniques, to map raw features into a unified dimension while keeping TabPFN’s parameters frozen. - (b) **Scaling to Large Datasets**: Bagging extends TabPFN to datasets with more samples. - (c) **Handling Large-Class Problems**: Assigning ECOC codes to different encoders overcomes TabPFN’s class limitations. 2. BETA remains compatible with PFN-style batching, ensuring minimal extra model inference computational cost. 3. An effective combination of techniques for TabPFN that addresses key weaknesses and significantly outperforms widely used, finely tuned baselines is a meaningful contribution worthy of further study. --- **Q2: But the authors claim that BETA demonstrates superior performance on many other datasets, even though this experiment is meant to verify that BETA performs better on large-scale datasets.** --- **A2**: This experiment aims to demonstrate BETA’s improved generalization on large datasets. As shown in Figure 5, BETA’s performance gain over TabPFN grows with dataset size. However, on the largest datasets, LocalPFN performs better because it assigns a unique context to each sample, preventing context sharing and making it **incompatible with the highly parallelizable batching strategy used in PFN**. While this boosts performance, it comes at the cost of much lower efficiency—as shown in Table 3, inference time increases significantly. 
LocalPFN is a valuable contribution, but this highlights a trade-off on large datasets: BETA offers better efficiency, while LocalPFN prioritizes local adaptation at a high computational cost. --- **Q3: Figure 2 introduces some settings based on TabPFN, missing the variants of Table 1.** --- **A3**: In Section 2.2, we categorize major TabPFN variants into Context Selection and Fine-tuning. As shown in Table 1, all listed variants fall into these two categories. As stated in Section 2.3, we selected TabPFN-KNN and TabPFN-Finetune as **the most representative methods** for these categories, with other approaches being minor modifications or combinations of these two. To ensure clarity and avoid redundancy, Figure 2 presents results for these key methods. The following table shows the relative percentage changes in bias and variance (**negative values indicate reduction**) of Beta and other variants compared to TabPFN, averaged over the dataset sizes used in Figure 2, on the *Adult* and *Bank* datasets. Since TabPFN's absolute variance on these datasets is smaller than its bias, the percentage changes in bias are much smaller than those in variance. |Adult|Beta|TuneTables|TabForestPFN|LocalPFN|MixturePFN|TabPFN-Bagging| |-|-|-|-|-|-|-| |Bias|**-4.29**|-1.58|-2.96|-3.67|-3.47|+1.75| |Variance|**-16.52**|+17.93|+7.76|+4.38|+5.26|-10.16| |Bank||||||| |Bias|**-1.48**|-0.25|-1.12|-1.39|-1.09|+0.39| |Variance|**-39.07**|+25.13|+18.47|+23.02|+10.35|-9.37| --- **Q4: No evidence to show the claims in Table 1.** --- **A4**: These claims are derived from the specific improvement strategies each method applies to TabPFN. For example, none of the other methods adjust TabPFN’s input representation, meaning they cannot handle high-dimensional datasets. Additionally, evidence supporting the claims about **bias and variance** can be found in *Figure 2* and the table in *A3* above. 
We greatly appreciate your suggestion and will further clarify Table 1 in the final version to ensure the claims are well-supported. --- **Q5: Why are the encoders in fine-tuning stage placed before TabPFN instead of after it?** --- **A5**: 1. Placing encoders before TabPFN allows it to handle **arbitrary feature dimensions** by mapping raw features to a fixed dimension. 2. Our preliminary explorations found that adding encoders after TabPFN provided almost no improvement. 3. Moreover, placing them before TabPFN integrates effectively with batch ensemble techniques, enhancing performance. --- **Q6: Some foundation tabular model is missing** --- **A6**: CARTE models tabular data as a graph, embedding column names and entries with a graph-attentional network. UniTabE encodes column names, data types, and cell values as tokens using an encoder-decoder architecture with contrastive learning. Unlike TabPFN, which relies on a pre-trained Transformer without semantic modeling, these models incorporate semantic information. We will enhance the discussion on tabular foundation models in the final version. --- Rebuttal Comment 1.1: Comment: After carefully reading this paper again and all the rebuttals \& responses from other reviewers, the reviewer decides to maintain the initial decision and **strongly recommend rejection**. The reasons go as follows: 1. **No open-source contribution.** This work is good engineering, unleashing and adjusting an existing model, TabPFN. However, it offers no implementation and detailed guidelines for others to reproduce its experiments. Considering the complex structures and many tricks introduced, the reviewer remains highly skeptical about how robust this method is and whether it genuinely outperforms other baselines, as the authors claimed. 2. **Many tricks are introduced into the model without proper justification.** For instance, how does ECOC help with multi-class problems? What are the unique challenges of introducing these designs? 
What are the benefits? What are the potential drawbacks if this trick is not introduced? Remember Occam's Razor.
3. **The novelty is limited.** It is no secret that ensemble learning can reduce variance. So what's the theoretical contribution in this paper? The reviewer feels there is little. Simplicity and Effectiveness are solid contributions when they are addressing well-motivated problems. But this is not the case for this paper.
4. **There are so many overclaims in the paper.** For instance, *No Additional Inference Cost* in Table 1 is highly misleading. How can ensemble learning have no additional inference cost than naive inference? Additionally, *Handles High-Dimensional Data* is also overclaimed. No evidence supports this claim. In the rebuttal, the author claims *For example, none of the other methods adjust TabPFN’s input representation, meaning they cannot handle high-dimensional datasets.* I don't see the correlation here. If none of the previous literature proves this, the author must conduct empirical experiments to support this strong claim.
5. **There is no takeaway from this paper.** After reading it, the reviewer learned nothing but the hidden message: **This is a good paper, we propose an awesome model, accept us!** What are the unique pros and cons of adopting TabPFN instead of tree-based or non-pre-trained models? Is this paper addressing these cons? Is the current evaluation solid enough to claim that we are progressing down-to-earth in this domain?

To SAC and AC: I am humbly asking you to pay more attention to this paper and make your decision while thinking about the signal sent to the community. If this paper is really accepted by ICML, it will negatively stimulate similar papers into the community. Do we really want researchers to combine tricks and yield on benchmarks instead of reflecting the bigger picture in the community? While both aspects are important, this paper is a bit extreme.
I failed to take away any messages or reflection from it. Also, I find it hard to be persuaded by three other reviewers' opinions, even though they **unanimously increased their overall scores to 4.**

---

Reply to Comment 1.1.1:

Comment: We thank the reviewer for their time during the review process. While we appreciate your feedback, we must respectfully address what appears to be a fundamental misunderstanding of both our work and the TabPFN literature:

> No open-source contribution:

1. Thank you for pointing out that “Beta is good engineering” and for acknowledging our efforts to unleash TabPFN.
2. Regarding your comment that “it offers no ... reproduce its experiments”, we respectfully disagree. **Section 3 provides complete implementation details of Beta, and Appendix B.4 specifies all hyperparameters used in our experiments**. We believe these sections offer sufficient information to support reproducibility.
3. Regarding robustness, our method outperforms well-tuned baselines on one of the largest tabular benchmarks, as noted by other reviewers. **Detailed ablation studies (Appendix D.3) and additional results (Section A1 in our response to Reviewer 2vtp)** further support its effectiveness and robustness.

> Many tricks are introduced into the model without proper justification.

1. *How does ECOC help...drawbacks without ECOC?*: Due to architectural constraints, TabPFN and its variants are inherently limited to classification tasks with at most 10 classes (as noted in Lines 355, 974, and 851). To overcome this limitation, ECOC is introduced, enabling Beta to handle >10-class problems.
2. Regarding Occam's Razor: Introducing ECOC is necessary to overcome TabPFN's architectural constraint (10-dim logit). It does not violate Occam’s Razor, as it is a slight and elegant modification to handle >10-class tasks.

> It is no secret ... can reduce variance.
While bagging may reduce variance, our method is specifically designed to address both bias and variance in TabPFN, which is non-trivial as shown in Figure 2. Most prior variants only improve one side while increasing inference time.

> Overclaims?

1. *No additional inference cost*: As stated in L128, our comparison is against the **ensemble-style TabPFN commonly used in practice**, not single-pass inference. This ensemble leverages *PFN-style batching* (Line 82, right), allowing multiple transformed inputs to share context and keeping inference time close to that of the single-pass version. This stands in contrast to LocalPFN [2], which incurs higher computational costs due to its use of non-shared contexts, leading to an inference count proportional to the size of the test set and thus additional cost. Beta follows this same efficient paradigm: for the m encoders in Beta, each encoder performs inference only once, resulting in m predictions. **Thus, our design keeps the high-efficiency benefit of ensemble-style PFN and introduces no additional inference cost compared to ensemble-style TabPFN.**
2. *Handles High-Dimensional Data...literature proves this, ...*: Beta’s ability to handle high-dimensional data is an inherent design feature, not an overclaim. TabPFN has a fixed 100D input limit and cannot process >100D data without adaptation (**Line 85, right**). The reviewer's comment reveals a **lack of understanding of TabPFN's architecture and suggests a lack of careful reading of our paper**.

> What are the unique pros and cons...instead of tree-based or non-pre-trained models?

1. As stated in the Introduction, we have outlined the motivation for studying pre-trained models, which has been thoroughly covered in prior work [1,3,4].
2. Due to its architectural constraints, TabPFN cannot handle large-scale data (Line 144), high-dimensional features (Line 85), or classification tasks with >10 classes (Lines 355, 974, and 851).
Beta addresses these limitations by leveraging bagging, multiple lightweight encoder fine-tuning, and ECOC, each tailored to the specific shortcomings of TabPFN.

> Do we really want researchers to...bigger picture in the community?

As the field of tabular foundation models is still in its early stages, contributions like ours help stimulate progress and attract broader research interest by overcoming key architectural limitations of TabPFN, such as its fixed input dimensionality and the 10-class restriction. By addressing these constraints and demonstrating strong empirical performance, our work expands the practical applicability of TabPFN and encourages further exploration and innovation in the development of tabular foundation models.

While our method achieves strong results, it is not merely a collection of tricks aimed at improving benchmarks. Instead, it provides a principled and extensible solution to the design limitations in TabPFN, with implications that go beyond performance metrics and toward broader usability and model design in the tabular foundation model domain.

[1] Tabpfn: A transformer that solves small tabular classification problems in a second
[2] Retrieval & fine-tuning for in-context tabular models
[3] Why In-Context Learning Transformers are Tabular Data Classifiers
[4] When Do Neural Nets Outperform Boosted Trees on Tabular Data
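The ECOC mechanism discussed in this thread (encode K > 10 classes as codewords over smaller sub-problems, then decode by nearest codeword) can be illustrated with a minimal sketch. This is our own toy illustration using binary sub-problems, not Beta's implementation:

```python
import numpy as np

def make_code_matrix(n_classes, code_len, seed=0):
    """One binary codeword (row) per class; each column defines a sub-problem."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, 2, size=(n_classes, code_len))

def ecoc_decode(bit_probs, code_matrix):
    """Assign each sample to the class whose codeword is closest (L1 distance)
    to the predicted per-bit probabilities."""
    dists = np.abs(code_matrix[None, :, :] - bit_probs[:, None, :]).sum(axis=2)
    return dists.argmin(axis=1)

# Toy example: 15 classes encoded with 12 binary sub-problems.
code = make_code_matrix(15, 12)
# Pretend the 12 binary classifiers returned slightly noisy probabilities
# for two samples whose true classes are 3 and 7.
rng = np.random.default_rng(1)
bit_probs = np.clip(code[[3, 7]] + 0.1 * rng.normal(size=(2, 12)), 0, 1)
pred = ecoc_decode(bit_probs, code)
print(pred)
```

In Beta's actual setting, each sub-problem would itself be a (at most 10-class) task that fits TabPFN's 10-dim logit head; binary codes are used here only to keep the sketch short.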
Summary: The manuscript first analyzes the generalization error with a bias-variance decomposition, finding that both bias and variance make a non-negligible contribution to the overall error. The authors then propose an extension tackling both error sources named BETA, which trains multiple dataset-specific encoders (to tackle bias) while averaging the predictions resulting from the different encodings (to minimize variance). Additionally, to further reduce variance during inference, they also use bootstrapped sampling, taking several random subsets of the training set as support sets and averaging the resulting predictions. To scale to more classes, they use Error-Correcting Output Codes.

Claims And Evidence: The claim "Scales to Large Datasets" I am not sure is supported; do you mean through the error-correcting output codes? Do they affect scaling behavior if there are only two classes? It seems unclear to me how "scales to large datasets" is meant, given that "handles high-dimensional data" and "adapts to more than 10 classes" are separately mentioned. [Update: explained by authors to mean the bagging part]

Methods And Evaluation Criteria: The benchmark itself seems suitable as far as I can judge it. I am missing an evaluation of the overall computation time, i.e., fine-tuning time plus inference time compared to other methods; this seems very relevant here to me. [Update: promised by authors to add]

Theoretical Claims: None

Experimental Designs Or Analyses: See methods

Supplementary Material: No

Relation To Broader Scientific Literature: Results on one method to further improve tabular in-context prediction and marry it with partial fine-tuning.
Essential References Not Discussed: None I am aware of

Other Strengths And Weaknesses: See below

Other Comments Or Suggestions:

"To mitigate this, TuneTables (Feuer et al., 2024) offers a more efficient alternative by either fine-tuning the entire model or applying prompt-tuning (Lester et al., 2021), which adjusts only a small set of parameters, reducing resource consumption" -> The formulation seems strange to me: TuneTables only optimizes the prompt, so what does "either fine-tuning the entire model" mean here? [Update: explained by authors, thanks]

"Minimizing L_total in Equation 6 ensures that the model jointly trains all encoders to generate diverse yet TabPFN compatible latent representations, thus reducing variance and providing more stable predictions." I don't see how it "ensures" "diverse representations"; there is nothing enforcing that as far as I see, so "ensure" seems a misleading formulation here (as one could imagine losses that would ensure it).

In Fig. 3, "Inference Stage" is not centered in the box; "Mean"/"Loss" aren't either. [Update: change promised, thanks]

Figure 5: I think the x-axis is just confusing in this way; it should represent actual dataset sizes, and then maybe one could just make a scatter plot of dataset size vs. relative improvement (notice also the typo in the y-axis). [Update: change promised, thanks]

Questions For Authors: -

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thanks for the valuable feedback. In this rebuttal, we address the reviewers' suggestions and concerns in a Q&A format:

---

**Q1: Finetuning time plus inference time compared to other methods.**

---

**A1**: In **Table 3**, we have provided a comparison of **inference time, average rank, and the number of learnable parameters**. To address this point more comprehensively, we include the **fine-tuning/training** time for **BETA** as well as other methods listed in Table 3. Notably, since **Beta only fine-tunes the encoder parameters**, it has a clear advantage in both **fine-tuning and inference efficiency** compared to other variants.

| Methods | Beta | LocalPFN | MixturePFN | FT-T | TabR |
| ---------------------------- | ----- | -------- | ---------- | ---- | ---- |
| finetuning/training time (s) | 62.29 | 186.73 | 83.14 | 58.9 | 121 |
| inference time (s) | 0.91 | 12.51 | 3.24 | 0.36 | 0.38 |

Additionally, it is important to highlight that methods like **FT-T and TabR** underwent an extensive hyperparameter tuning process in [2], spanning **100 trials**. As a result, their actual tuning time is approximately **100 times** their reported training duration.

---

**Q2: Claim: Response to "Scales to Large Datasets"**

---

**A2**: We refer to datasets with a **large number of samples** as **large datasets**. TabPFN is inherently limited by its **attention mechanism over instances**, leading to high memory overhead. As a result, it typically requires subsampling the training set to remain computationally feasible. To mitigate the impact of context size on model performance during inference, we introduce bagging for TabPFN. Additionally, since TabPFN is pretrained only on synthetic datasets with fewer than 1000 rows, we incorporate multiple lightweight encoders to reduce bias, enabling the model to better scale to large datasets.
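The bagging scheme described in A2 can be sketched as follows: sample several bootstrap context subsets from the training set, run the in-context predictor once per subset, and average the predicted probabilities. This is a minimal illustration with a stand-in predictor (`toy_predict` and all other names here are our own, not BETA's code):

```python
import numpy as np

def bagged_predict(predict_fn, X_train, y_train, X_test,
                   n_bags=8, context_size=1000, seed=0):
    """Average predictions over bootstrap-sampled context subsets."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    probs = []
    for _ in range(n_bags):
        # Each bag sees a different random context, mitigating context-size effects.
        idx = rng.choice(n, size=min(context_size, n), replace=True)
        probs.append(predict_fn(X_train[idx], y_train[idx], X_test))
    return np.mean(probs, axis=0)

# Stand-in for an in-context predictor such as TabPFN: a class-frequency baseline.
def toy_predict(X_ctx, y_ctx, X_test):
    freq = np.bincount(y_ctx, minlength=2) / len(y_ctx)
    return np.tile(freq, (len(X_test), 1))

X = np.random.default_rng(0).normal(size=(5000, 4))
y = (X[:, 0] > 0).astype(int)
p = bagged_predict(toy_predict, X, y, X[:10])
print(p.shape)  # (10, 2)
```

Because the context is bounded by `context_size` per bag, each forward pass stays within the model's feasible context length regardless of the full training-set size, which is the point made in A2.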
---

**Q3: About TuneTables: Formulation seems strange to me, TuneTables only optimizes the prompt, so what does this "either finetuning the entire model" what do you mean here?**

---

**A3**: In **classification tasks**, TuneTables employs **prompt tuning**, where *“which adjusts only a small set of parameters, reducing resource consumption”* specifically refers to prompt tuning. However, in their **extension to regression tasks**, they adopt **end-to-end fine-tuning**. This is why we mentioned *"fine-tuning the entire model."* We will clarify this distinction in the final version.

---

**Q4: I don't see how it "ensures" "diverse representations", there is nothing enforcing that as far as I see, so ensure seems a misleading formulation here (as one could imagine losses that would ensure it).**

---

**A4**: We appreciate the reviewer’s observation. The goal is to ensure that each encoder generates **TabPFN-compatible latent representations**, while diversity is encouraged through different initializations. Additionally, **TabM** [1] has discussed diversity in base models, and we further analyze BETA’s encoder diversity through **embedding visualizations in Appendix Figure 9**, showing that the learned representations exhibit diversity. We will provide a detailed explanation of **diversity** and **TabPFN-compatible latent representations** in the final version.

---

**Q5: Response to Figure Presentation.**

---

**A5**: We appreciate the reviewer’s helpful suggestions.

- **Figure 3**: We will adjust the alignment of **"Inference Stage"**, **"Mean"**, and **"Loss"** to ensure proper centering within their respective boxes.
- **Figure 5**: We will modify the **x-axis** to directly represent **actual dataset sizes** and consider using a **scatter plot of dataset size vs. relative improvement** for better clarity. Additionally, we will correct the **typo in the y-axis**.

These improvements will be incorporated into the final version.
We sincerely appreciate the reviewer’s **careful and constructive feedback**. We believe these valuable suggestions will **further enhance the quality** of our paper, and we will incorporate the necessary revisions accordingly.

[1] Tabm: Advancing tabular deep learning with parameter-efficient ensembling. 2025.
[2] A Closer Look at Deep Learning Methods on Tabular Datasets. 2024.

---

Rebuttal Comment 1.1:

Comment: Thank you for your explanations and the promises of the manuscript improvements, I have updated my review accordingly. I would still like a change of wording in the "ensure [...] diverse representations" part, e.g. to something like "Minimizing L_total in Equation 6 jointly trains all encoders to generate TabPFN-compatible latent representations, with the different encoder initializations encouraging diverse representations and thereby reducing variance and providing more stable predictions."

---

Reply to Comment 1.1.1:

Comment: We greatly appreciate the score improvements from the reviewer. We are glad that our response was able to address your concerns, and thank you for recognizing our efforts. It is clear from your detailed comments and thoughtful responses that you carefully read and thoroughly considered our work. We truly appreciate your responsible and constructive reviewing. Your suggestions have been very helpful in improving the quality of our paper. We will revise the final version to incorporate your suggestions, especially by improving the clarity of the “ensure [...] diverse representations” part as you pointed out. Once again, thank you for your time, effort, and valuable feedback.
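The multiple-lightweight-encoder design discussed in this thread (several differently initialized encoders in front of a frozen backbone, with averaged outputs) can be sketched as a toy model. `MultiEncoderEnsemble` and the linear stand-in backbone are our own illustration, not BETA's architecture:

```python
import torch
import torch.nn as nn

class MultiEncoderEnsemble(nn.Module):
    """Several independently initialized linear encoders map raw features to a
    fixed dimension; a frozen backbone scores each encoding and outputs are averaged."""
    def __init__(self, in_dim, enc_dim, n_encoders, backbone):
        super().__init__()
        # Different random initializations are the source of encoder diversity.
        self.encoders = nn.ModuleList(
            nn.Linear(in_dim, enc_dim) for _ in range(n_encoders)
        )
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False   # only the encoders are fine-tuned

    def forward(self, x):
        outs = [self.backbone(enc(x)) for enc in self.encoders]
        return torch.stack(outs).mean(dim=0)

backbone = nn.Linear(32, 10)          # stand-in for a frozen TabPFN-style model
model = MultiEncoderEnsemble(in_dim=200, enc_dim=32, n_encoders=4, backbone=backbone)
logits = model(torch.randn(8, 200))   # 200-d input handled despite the 32-d backbone
print(logits.shape)  # torch.Size([8, 10])
```

The sketch also shows the high-dimensional-input point from the rebuttal: the encoders map an arbitrary feature dimension (here 200) down to the backbone's fixed input dimension.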
OrthoRank: Token Selection via Sink Token Orthogonality for Efficient LLM inference
Accept (poster)
Summary: This paper introduces OrthoRank, a dynamic token selection method that exploits the relationship between sink tokens and other tokens to improve LLM inference efficiency. The authors observe that as layers deepen in LLMs, the cosine similarity between normalized hidden states of the sink token and other tokens increases, while the sink token's state remains relatively static. Based on this, they propose selecting tokens with greater orthogonality to the sink token for computation, bypassing others except for KV calculations. Experiments demonstrate that OrthoRank achieves lower perplexity and higher zero-shot accuracy compared to baselines.

## Update after rebuttal

The authors have addressed my main questions and concerns. For me, this is a self-consistent and complete work. I have also carefully read and am aware of the other reviewers' issues. Overall, I maintain my original borderline accept score.

Claims And Evidence: yes

Methods And Evaluation Criteria: yes

Theoretical Claims: The paper's main theoretical claim is the derivation of token importance based on orthogonality to the sink token. It appears sound, assuming the stated approximation that normalized hidden states have approximately equal norm.

Experimental Designs Or Analyses: yes

Supplementary Material: There are no supplementary materials.

Relation To Broader Scientific Literature:
- This paper extends the concept of attention sinks.
- This paper provides an alternative to layer pruning methods.

Essential References Not Discussed: None

Other Strengths And Weaknesses:

**pros:**
- Method is applicable across different model architectures and sizes.
- Empirical evidence shows performance improvements over existing layer pruning techniques.
- The paper is well written and easy to follow.
**cons:**
- see the questions below

Other Comments Or Suggestions: see the questions below

Questions For Authors:
- Why should orthogonality be the primary criterion for token importance rather than other metrics like attention scores, gradient-based importance, or semantic significance?
- How does the performance vary with different definitions of the sink token? Would using a different token (not the first position) as the reference point yield different or better results?
- The fixed token selection ratio (33%) seems arbitrary. Why not implement an adaptive threshold based on the orthogonality distribution within each layer?
- For more real tasks, how does OrthoRank impact generation quality metrics beyond perplexity, such as coherence, factuality, or hallucination rates?
- How does OrthoRank behave with extremely long context lengths where the sink token's influence might be significantly different?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you very much for your thoughtful and constructive review. Below we provide responses regarding the specific questions you raised. We hope this analysis addresses your concerns and welcome any further feedback.

### **Q1: Orthogonality as a Token Importance Metric**

The rationale behind using orthogonality as a criterion is discussed in detail in Q1 from the first reviewer, gDdH. Please refer to it for further clarification. From an implementation perspective, alternative metrics such as attention scores are less practical. Computing them for unselected tokens introduces overhead, defeating the purpose of selective computation. Moreover, flash attention does not expose intermediate values, further limiting feasibility. Gradient-based methods are also expensive, and semantic significance lacks a clear definition, requiring further exploration.

### **Q2: Sink Token Definition**

We experimented with the Llama-2-7B model, as [0] notes that attention sink can also occur at the first strong delimiter (e.g., “.” or “\n”), allowing us to examine the impact of different reference points.

| Reference | Llama-2-7B (ppl / acc) |
|-|-|
| **first position** | **10.04** / **60.35** |
| first “.” or “\n” | 10.61 / 59.66 |

Alternative reference points showed similar or slightly lower performance. This is expected, as the cosine similarity between sink tokens at different reference points was very high (over 0.95), resulting in similar selected tokens. The slightly lower performance is likely due to errors caused by unparsed strong delimiters. Since the conditions for sink tokens vary by model, we recommend using the first position as a reference point.

### **Q3: Dynamic Token Selection Ratio**

We believe that your idea represents a crucial and necessary approach. However, optimizing layer selection with varying token selection ratios would greatly expand the search space and require further research.
Therefore, we tested a simpler approach by maintaining an average sparsity equal to the fixed 33% ratio through a linear variation in the token selection ratio according to layer depth.

### **Experimental Setup**

We conducted experiments using the Llama-2-13B model with a target sparsity of 20%. The selected layers for token selection were as follows:

Selected layers for 20% sparsity: [8, 9, 10, 11, 12, 22, 25, 27, 29, 31, 33, 34]

We evaluated two different strategies for the **token selection ratio per layer**:

**Linear1: Selecting more tokens in deeper layers**
- Token selection ratio: [0.0, 0.061, 0.121, 0.182, 0.242, 0.303, 0.364, 0.424, 0.485, 0.545, 0.606, 0.667]

**Linear2: Selecting more tokens in shallower layers**
- Token selection ratio: [0.667, 0.606, 0.545, 0.485, 0.424, 0.364, 0.303, 0.242, 0.182, 0.121, 0.061, 0.0]

### **Results**

| Method | ppl / acc |
|-|-|
| **Fixed ratio (0.33)** | **8.74** / 66.99 |
| Linear 1 | 9.10 / 63.74 |
| Linear 2 | 8.97 / **68.18** |

Experimental results showed that the fixed ratio setting achieved better perplexity, while selecting more tokens in shallower layers led to higher accuracy. This demonstrates that even with a simple approach, performance improvements are possible. We believe that determining a dynamic token selection ratio, building on these results, would be a promising research direction.

### **Q4: Impact on Generation Quality Metrics**

To address concerns about generation quality, we conducted additional experiments using the TruthfulQA benchmark, which evaluates truthfulness and the ability to avoid hallucinations. We evaluated two different metrics:

- MC1: Measures accuracy by selecting the highest log-probability answer among choices.
- Generation (BLEU): Measures truthfulness by comparing the generated response with the ground truth, while also considering informativeness to avoid non-informative answers.
| Method | Llama-2-13B (mc1 / gen) | Llama-3-8B (mc1 / gen) | Mistral-7B (mc1 / gen) | Mixtral-8X7B (mc1 / gen) |
|-|-|-|-|-|
| SLEB | 21.2 / 20.82 | 19.8 / 4.17 | 21.3 / **21.6** | 24.1 / **27.1** |
| +OrthoRank | **22.3** / **23.6** | **21.6** / **15.28** | **23.6** / 20.17 | **25.2** / 26.85 |

The results demonstrate that OrthoRank improves factual accuracy compared to the baseline SLEB model in most cases. Generation examples can also be found in Appendix H.

### **Q5: Performance with Extremely Long Contexts**

Figure 3 and Table 8 in [1] show that StreamingLLM, which caches initial tokens, consistently outperforms other methods even with long contexts, highlighting the significant influence of the initial sink token. Similarly, Section 4.5 demonstrates that OrthoRank achieves strong performance, indirectly suggesting that the sink token’s influence remains stable even with longer context lengths.

---

[0] Massive activations in large language models, COLM 2024
[1] Efficient Streaming Language Models with Attention Sinks, ICLR 2024

---

Rebuttal Comment 1.1:

Comment: Thank you for the response. Most of my concerns have been addressed. For now, I will keep my score, and I will also pay attention to the authors' discussion with other reviewers.

---

Reply to Comment 1.1.1:

Comment: Thank you for your thoughtful response. We truly appreciated your detailed feedback on the token importance criterion, the definition of sink tokens, the adaptive method, generation quality, and the handling of extremely long contexts, which helped us highlight OrthoRank’s strengths from multiple perspectives and improve the overall quality of the paper. We are also pleased to note that **most of your concerns have been addressed**. To provide a **more complete response to Q1**, we include **empirical comparisons** against **attention-based selection** to further support our use of orthogonality beyond its theoretical motivation.
The results are shown in the following table.

| Method | Sparsity | Throughput improv. | Llama-2-13B (ppl / acc) | Llama-3-8B (ppl / acc) | Mistral-7B (ppl / acc) | Mixtral-8X7B (ppl / acc) |
|-------|-|-|-------------------------|-------------------------|-------------------------|--------------------------|
| **Orthogonal ↑** | 20% | **1.18x** | **8.74** / **66.99** | 14.95 / **60.84** | **11.54** / **63.88** | 9.39 / **72.52** |
| Attention ↑ | 20% | 0.71x | 8.90 / 63.33 | **14.85** / 57.76 | 11.60 / 56.81 | **9.01** / 66.73 |

- To enable the attention-based token selection baseline, we had to use **eager attention** specifically for that method.
- While attention-based selection sometimes shows lower perplexity depending on the model, it consistently results in **significant drops in zero-shot accuracy**.
- Moreover, we emphasize that **computing attention scores** for all tokens, including **those not selected**, introduces **overhead** and makes the method **incompatible** with **fused-kernel implementations** such as FlashAttention or SDPA, thereby undermining the efficiency gains expected from selective computation.

---

Thank you once again for your time, effort, and valuable contributions.

Best regards,
Authors
Summary: This paper joins the ranks of other works concerned with reducing LLM inference costs. The authors start with the observation that the cosine similarity between the hidden states of the sink token and other tokens increases the deeper in the model we are, despite stationary sink hidden states. Based on that observation, the authors propose OrthoRank, which prioritizes the computations of tokens whose hidden states are roughly orthogonal to that of the sink tokens.

Claims And Evidence: The authors make two empirical claims: first, that the hidden states of non-sink tokens converge in cosine similarity to the sink token, and second, that their approach OrthoRank, derived from that observation, results in lower perplexity and higher accuracy with comparable throughput. I find the experiments to support all of the above claims.

Methods And Evaluation Criteria: yes

Theoretical Claims: not applicable

Experimental Designs Or Analyses: I checked all of the experimental details

Supplementary Material: Skimmed

Relation To Broader Scientific Literature: I think this paper will have an impact in decreasing the inference cost of LLMs, as it proposes an off-the-shelf approach to saving computation by ignoring tokens that might not need to be updated.

Essential References Not Discussed: None that I'm aware of

Other Strengths And Weaknesses:
- I think the paper is well written and was easy to follow. Moreover, I think the idea is intuitive and I expect it to see a fair share of adoption. That being said, I was left wondering about some experimental details that I would like to ask about:

1. Starting line 237, what model is being used here? And I'm guessing this was performed at every layer?
2. Could you give more details about how you're using OrthoRank with selective layers? How exactly are the layers being selected?
3. In section 4.2, am I to understand that Wikitext-2 was used as a sort of validation dataset?
Does your approach require access to a validation set during inference?
4. I was expecting to see comparisons against other methods, including methods that perform token pruning, but also approximate attention approaches which, while different in their approach, aim to achieve the same goal as layer pruning and token pruning approaches.

Other Comments Or Suggestions:
- (nitpicking) I would suggest modifying Figure 2, as it is currently illegible on paper due to the very small font size

Questions For Authors: Please see above

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you for your supportive review and thoughtful suggestions. We greatly appreciate your positive feedback on the clarity, intuition, and practical impact of OrthoRank. Below we provide responses regarding the specific questions you raised. We hope this analysis addresses your concerns and welcome any further feedback.

### **Q1: Experimental Details (Line 237)**

Thank you for pointing this out. The results in line 237 are based on the Llama-2-13B model. Token selection was performed at a single layer, and the change in the final output was measured. This process was repeated for each layer to evaluate our criterion. Additional results from other models can be found in Appendix B. We will update the revised manuscript accordingly. Thank you again for your careful review.

### **Q2: Selective Layer Usage Clarification & Q3: Use of Wikitext-2 Dataset (Section 4.2)**

Similar to selecting layers for pruning in layer pruning methods, we progressively replace layers with OrthoRank-applied layers, identifying the layers that exhibit minimal performance degradation (as described in Section 3.2). During this process, we can also utilize an iterative approach, such as SLEB [0], which is represented as "SLEB + OrthoRank" in the experimental results. To assess the impact of applying OrthoRank, a validation set is required, and we used Wikitext for this purpose. Since the layer selection process is conducted offline, there is **no need to access the validation set during actual inference**. In accordance with the evaluation protocol [0], we ensured that performance comparisons were conducted on C4 instead of Wikitext to prevent information leakage from the layer selection process.

### **Q4: Broader Method Comparisons**

Approximate attention methods focus on selecting a subset of keys and values to approximate the output of full attention computation.
In contrast, as outlined at the end of **Section 3.2**, our proposed OrthoRank reduces **the number of queries**, thereby not only lowering attention computation but also **reducing the input size for the feed-forward network (FFN)**, effectively decreasing the overall computational cost in proportion to the token selection ratio. Thus, OrthoRank and approximate attention methods can be used together rather than being direct alternatives. We will clarify this distinction in the revised version of our paper.

### **Figure 2 Readability**

Thank you for your note on readability issues with Figure 2. We will improve its visibility by adjusting font sizes and layouts in our updated manuscript.

---

[0] SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks, ICML 2024
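The compute argument above (fewer queries shrink both the attention mixing cost and the FFN input) can be made concrete with a back-of-envelope cost model. The cost formulas below, and the assumption that unselected tokens still receive K/V projections, are our own simplifications for illustration, not figures from the paper:

```python
def layer_cost_ratio(r, n, d, d_ff):
    """Rough cost of an OrthoRank-style layer relative to a full layer.
    r: fraction of selected tokens; n: sequence length; d: hidden dim;
    d_ff: FFN dim. Constant factors and normalization/softmax costs ignored."""
    attn_proj_full = 4 * n * d * d           # Q, K, V, O projections
    attn_mix_full  = 2 * n * n * d           # QK^T and attention-weighted V
    ffn_full       = 2 * n * d * d_ff        # up + down projections
    full = attn_proj_full + attn_mix_full + ffn_full

    attn_proj_sel = (2 * r + 2) * n * d * d  # Q/O only for selected; K/V for all tokens
    attn_mix_sel  = 2 * (r * n) * n * d      # only selected queries attend (full keys)
    ffn_sel       = 2 * (r * n) * d * d_ff   # FFN runs only on selected tokens
    return (attn_proj_sel + attn_mix_sel + ffn_sel) / full

# Dimensions roughly matching a Llama-2-13B layer at a 4k context (our assumption).
print(round(layer_cost_ratio(0.33, 4096, 5120, 13824), 3))
```

The ratio degrades gracefully: at `r = 1` it equals 1 (no savings), and it falls toward the K/V-only floor as `r` shrinks, matching the claim that both attention and FFN costs scale with the token selection ratio.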
Summary: The paper introduces a novel dynamic token selection method, OrthoRank, aimed at improving the efficiency of large language model (LLM) inference with less computation, especially for long contexts. The authors observe that for some models, as layers deepen, the cosine similarity between the normalized hidden states of the sink token and other tokens increases, while the sink token's normalized hidden states remain largely unchanged. Based on this, OrthoRank selects tokens that are more orthogonal to the sink token and assigns them greater importance. The authors show that OrthoRank can improve the performance of previous methods such as SLEB or Shortened LLaMA on LongBench.

Claims And Evidence:
(1) The nature of sink tokens: well supported by models of various sizes.
(2) Connecting sink tokens to token selection: somewhat unclear; why can orthogonality be generally used for token selection? It is true that sink tokens tend to be static and finding different representations can bring diversity. However, tokens orthogonal to sinks may also contain excessive noise that should be learned to be eliminated. Are there any constraints needed instead of pursuing pure orthogonality?

Methods And Evaluation Criteria: Generally the datasets involved are sufficient to evaluate the method. However, the comparison with existing layer pruning methods only considers SLEB and Shortened LLaMA. There are many more methods, such as H2O, SnapKV, etc. The comparison could cover more methods.

Theoretical Claims: The paper includes a theoretical derivation to support the token selection criterion based on cosine similarity, which is mainly based on the cosine similarity of the sink token and other tokens. The intuition makes sense, while the question is about the connection to token selection.

Experimental Designs Or Analyses: The evaluation as well as visualization looks sufficient to characterize the method well.

Supplementary Material: I have read through the SM part.
Relation To Broader Scientific Literature: The reviewer does not find significant broader scientific contribution. Essential References Not Discussed: This paper generally cites references well. Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: Please see the above sections. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you very much for your insightful comments and constructive criticism. We appreciate your careful consideration of our work and your valuable suggestions for improvement. Below we address the concerns you raised and outline how we will incorporate your feedback: ### **Connecting sink token to token selection** We agree that orthogonality can promote diversity and acknowledge the potential for noise. However, our approach is based on the observation that, as tokens propagate through the layers of an LLM, they naturally become more **aligned with the sink token**, increasing their **cosine similarity** (Figure 2, L209-L211). We interpret this growing alignment through layers as an indicator that a token is **on the path of being updated** (Section 3.1). Therefore, we consider tokens with **faster alignment**—i.e., **higher speed**—as more important (L31-L33). Rather than pursuing orthogonality purely for diversity, we use it as a **proxy for speed** and leverage this dynamic for token selection. In this context, orthogonality is not just for diversity but reflects the natural convergence behavior of tokens. Our experiments show that this approach improves performance without additional training. We appreciate your valuable input and will explore incorporating additional constraints—such as semantic relevance or inter-token relationships—beyond pure orthogonality, to develop a more robust token selection strategy. Please kindly refer to Q1 with Reviewer gDdH and Q1 with Reviewer SyT3 if you're also interested in comparisons with other criteria. We will provide a clearer explanation and include the relevant discussion in the revised version of the paper. ### **Comparison with More Methods** H2O and SnapKV are algorithms designed for managing KV caches, which belong to a different research domain compared to OrthoRank. 
Although our approach differs as we calculate KV even for unselected tokens, it is possible to apply these algorithms to OrthoRank. We believe that by reducing the number of forwarded tokens through OrthoRank and minimizing KV cache using techniques like H2O and SnapKV, the overall efficiency of LLM inference could be significantly improved. We will clearly address the differences between our approach and these studies in the revised manuscript.
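As a rough illustration of the orthogonality criterion discussed in this thread, the following sketch ranks tokens by how orthogonal their normalized hidden states are to the sink token's; the function, the toy shapes, and the `keep_ratio` value are hypothetical, not taken from the released implementation:

```python
import numpy as np

def select_tokens(hidden, sink_idx=0, keep_ratio=0.3):
    """Keep the tokens whose normalized hidden states are most
    orthogonal to the sink token's (smallest |cosine similarity|)."""
    normed = hidden / np.linalg.norm(hidden, axis=-1, keepdims=True)
    cos = normed @ normed[sink_idx]
    order = np.argsort(np.abs(cos))            # most orthogonal first
    n_keep = max(1, int(keep_ratio * len(hidden)))
    return np.sort(order[:n_keep])             # indices of selected tokens

rng = np.random.default_rng(0)
h = rng.normal(size=(16, 64))                  # 16 toy tokens, dim 64
idx = select_tokens(h, keep_ratio=0.25)
print(idx)  # 4 selected token indices; the sink itself is never chosen
```

Note that the sink token has cosine similarity 1 with itself, so it always ranks last and is never among the selected (updated) tokens.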
Summary: The paper introduces OrthoRank, a new method for selecting important tokens in Large Language Models (LLMs) to improve inference efficiency. The method is based on the observation that in LLMs, after the attention sink occurs, the cosine similarity between the normalized hidden states of the sink token and other tokens increases as layers deepen, while the sink token's hidden states remain relatively unchanged. OrthoRank selects tokens based on their orthogonality to the sink token, prioritizing tokens that are more orthogonal for updates. The authors claim that OrthoRank achieves lower perplexity and higher zero-shot accuracy compared to layer pruning methods at the same sparsity ratio, with comparable throughput, and superior performance on LongBench. Claims And Evidence: The cosine similarity analysis and the behavior of sink tokens are well-supported. The authors present detailed results in Section 2 and Appendix B, with clear visualizations (Figures 2, 3, 9, and 10) that corroborate their observations regarding the cosine similarity between the sink token and other tokens, and the relatively unchanged state of the sink token across layers.   The effectiveness of OrthoRank is demonstrated through various experiments. The authors compare OrthoRank with layer pruning methods and show that it achieves better performance in terms of perplexity, zero-shot accuracy, and performance on LongBench. The ablation studies further validate the design choices of OrthoRank, such as the token selection criteria and the importance of KV calculations.   The trade-offs between throughput and performance are analyzed. Figure 6 and Figure 14 illustrate the relationship between throughput improvements and perplexity, showing that OrthoRank achieves comparable or better performance than layer pruning methods while maintaining a throughput increase nearly proportional to sparsity.   
While the paper demonstrates OrthoRank's superior performance compared to layer pruning, the discussion around the limitations of layer pruning could be more nuanced. The paper states that layer pruning methods "do not effectively reflect the specific characteristics of the input tokens" and that they may result in "abrupt performance degradation". While this is true, layer pruning is a well-established and effective technique for LLM efficiency. A more balanced discussion acknowledging the strengths and weaknesses of both approaches would provide a more comprehensive view. Methods And Evaluation Criteria: Proposed Methods: The proposed method, OrthoRank, is designed to improve the efficiency of Large Language Model (LLM) inference by selecting important tokens and bypassing computations for less important ones. This is a relevant goal in the context of LLMs, where computational cost is a significant challenge. The method leverages the concept of the "attention sink" and introduces a novel approach to token selection based on the orthogonality of tokens to the sink token. This is a reasonable approach, as it tries to exploit the internal mechanisms of LLMs to achieve efficiency gains.   Evaluation Criteria: The paper uses perplexity and zero-shot accuracy as key evaluation metrics. These are standard and widely accepted metrics for evaluating language models, making them appropriate for the task. The authors also evaluate their method on the LongBench benchmark, which is designed to assess the performance of models on long-context understanding. This is particularly relevant given the challenges LLMs face with long sequences. The paper includes ablation studies to analyze the impact of different components of their method, such as token selection criteria and the use of KV calculations. This is a good practice to validate design choices and understand the contribution of each component. 
Theoretical Claims: The paper includes a section (Section 3.1) that provides a derivation for its token selection criteria. Here's a breakdown of my assessment: The authors aim to define token importance based on the change in cosine similarity with the sink token. They start by expressing the cosine similarity and computing its gradient with respect to the hidden state of a token. The derivation seems correct.   To simplify the importance metric, the authors make an assumption that normalized hidden states have approximately equal norms (except for the sink token). This assumption is supported by Figure 12 in Appendix C, which shows that the norms of hidden states are indeed approximately equal.   Based on this assumption, the authors simplify the expression and show that the importance of a token is related to how small the cosine similarity between that token and the sink token is. The algebraic manipulations in Appendix C appear to be correct.   Overall, the derivation of the token selection criteria seems to be logically sound and the assumption made is supported by empirical evidence. Experimental Designs Or Analyses: The analysis is thorough and well-presented. The authors used appropriate metrics (perplexity, zero-shot accuracy, accuracy on LongBench) to evaluate the performance of OrthoRank. One potential area where the analysis could be enhanced is the discussion around the limitations and trade-offs of OrthoRank. While the authors compare OrthoRank with layer pruning, a more in-depth analysis of scenarios where layer pruning might be more suitable or efficient would provide a more balanced perspective. Supplementary Material: Yes, all supplementary material Relation To Broader Scientific Literature: Attention Sink Analysis: The paper builds upon existing research on the "attention sink" phenomenon in LLMs. It references the initial discovery of the attention sink by Xiao et al. 
(2024), which highlighted how the initial token in a sequence often receives disproportionately high attention. It also acknowledges further explorations of this phenomenon and techniques to calibrate or leverage it for improved LLM efficiency. The authors expand on this by analyzing the cosine similarity between the sink token and other tokens in hidden states, which they claim is a novel approach. Token Selection Methods: The paper's contribution to token selection is related to prior work in token pruning and dynamic token selection. It contrasts its approach with token pruning methods that progressively drop tokens across layers.   It also positions its work in the context of dynamic token selection and early exit mechanisms, which aim to improve efficiency by selectively processing tokens. The key difference is that OrthoRank selects tokens based on their orthogonality to the sink token, without requiring additional training or modules.   Efficiency in LLMs: The paper addresses the broader challenge of improving the efficiency of Large Language Models (LLMs), which is a significant area of research. It discusses layer pruning as a common technique for reducing computational costs. It contrasts OrthoRank with layer pruning, highlighting its ability to provide more fine-grained control over computational efficiency by selecting tokens within layers. Essential References Not Discussed: Nope Other Strengths And Weaknesses: Strengths: Novelty: Introduces OrthoRank, a new token selection method based on token-sink orthogonality, and provides new insights into the attention sink phenomenon. Significance: Addresses the critical problem of high computational cost in LLM inference, contributing to the development of more efficient LLMs. Clarity: Well-written and easy to follow, with clear explanations, figures, and supplementary materials. 
Weaknesses: Scope of Analysis: Primarily focuses on the relationship between the sink token and other tokens, with limited analysis of relationships among non-sink tokens. Assumption of Equal Norms: Relies on the assumption that normalized hidden states have approximately equal norms, which could benefit from further theoretical justification. Generalizability: Primarily demonstrates effectiveness on autoregressive models; generalizability to other LLM types could be explored further. Other Comments Or Suggestions: The authors may want to clarify the positioning of their work with respect to other token selection methods. While they mention that their method does not progressively drop tokens across layers, like some token pruning methods, the distinction could be further emphasized. For example, OrthoRank could be characterized as a method that performs token selection within a layer, maintaining the full sequence length across layers, but reducing computation at selected layers. Questions For Authors: Token Selection Rationale: Can you provide more insight into why orthogonality-based token selection is more effective than alternatives like attention scores or hidden state magnitudes? Layer Pruning Limitations: Could you elaborate on the trade-offs between OrthoRank and layer pruning, discussing when layer pruning might be preferred? Generalizability: How can OrthoRank be applied to LLMs beyond autoregressive models, such as encoder-decoder models? Hyperparameter Tuning: Please provide more guidance on tuning OrthoRank's hyperparameters (e.g., token selection ratio, layer selection) for optimal performance. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your detailed and constructive feedback. We truly appreciate the considerable time and effort you invested in evaluating our paper and your recognition of OrthoRank’s novelty. Below we clarify the points you raised. ### **W2: Assumption of Equal Norms** OrthoRank is based on (scaled) normalized hidden states obtained via RMSNorm, which first normalizes each token vector to unit RMS norm and then applies a learned element-wise scale: $$ \mathbf{h}_i^{\text{scaled}} = \mathbf{g} \odot \left( \frac{\mathbf{h}_i}{\text{RMS}(\mathbf{h}_i)} \right) = \mathbf{g} \odot \mathbf{h}_i^{\text{norm}} $$ This ensures all tokens start with equal norm, after which **learned scaling** introduces norm differences. Interestingly, the scaling vector **g** has **very low variance (~0.001)**, effectively acting as a near-constant scalar. For **non-sink tokens**, cosine similarity between scaled and normalized hidden states remains high (≥ 0.95), indicating that scaling behaves approximately as **scalar multiplication**: $$ \cos(\theta) \approx 1 \quad \Rightarrow \quad \mathbf{h}_i^{\text{scaled}} \approx \alpha \mathbf{h}_i^{\text{norm}} $$ Since scalar multiplication preserves norm ratios, the scaled hidden states of non-sink tokens maintain near-equal norms: $$ \|\mathbf{h}_i^{\text{scaled}}\| \approx \alpha \cdot c \approx \|\mathbf{h}_j^{\text{scaled}}\| \quad \text{for all } i, j \text{ in non-sink tokens} $$ In contrast, sink tokens show a significant drop in similarity (~0.6), likely due to **energy compaction** from **massive activation [0]** in a few dimensions. This makes them more sensitive to suppression, even if **g is nearly constant**. Therefore, given the **uniformity of g**, it is reasonable to assume the scaled hidden states of non-sink tokens have approximately equal norms. ### **W3 & Q3: Generalizability** To test generalizability beyond autoregressive models, we evaluated OrthoRank on the encoder-based BERT-base-uncased.
Attention sinks were observed, most often at [SEP], but also at [CLS] or '.' depending on the layer. However, unlike in autoregressive models, cosine similarity with [SEP] did not show a consistent increase. Thus, OrthoRank is not directly applicable to encoder models, and adapting it may require handling layer-wise variation in sink tokens—a direction we leave for future work. ### **Q1: Token Selection Rationale** We consider an increase in **cosine similarity** with the sink token as an indication that a **token is on the path of being updated** (Figure 2, L209-L211). Therefore, as tokens propagate through the layers, it is most efficient to update those that exhibit **faster movement**. Based on this intuition, we define the **speed** of a token as its importance. Accordingly, **orthogonality**—interpreted as speed—is used as a criterion for token selection. Attention score-based KV cache eviction treats tokens as sources of information, aiming to retain those with high attention to surrounding tokens and preserve the attention matrix before and after eviction. While **high attention scores indicate strong influence on other tokens**, they hold **little meaning** from the **updating token’s perspective**. Since it was unclear to us whether the hidden state norm is an effective token selection criterion, we conducted experiments to evaluate it.

| Method | Llama-2-13B (ppl / acc) | Llama-3-8B (ppl / acc) | Mistral-7B (ppl / acc) | Mixtral-8X7B (ppl / acc) |
|-|-|-|-|-|
| Ours | **8.74** / **66.99** | **14.95** / **60.84** | **11.54** / 63.88 | **9.39** / **72.52** |
| Norm ↑ | 9.46 / 62.94 | 16.23 / 59.46 | 12.26 / 59.97 | 9.39 / 70.89 |
| Norm ↓ | 9.15 / 64.47 | 16.73 / 60.13 | 21.12 / **63.99** | 9.78 / 66.50 |

Our method consistently performs well across models, achieving top or comparable results in both perplexity and accuracy.
While Mistral-7B slightly outperforms in zero-shot accuracy using the high-norm method, it suffers from higher perplexity, potentially harming performance elsewhere. This highlights the robustness of our approach. ### **Q2: Balanced Discussion on Layer Pruning** We agree that layer pruning is a well-established and effective approach for improving LLM efficiency. As noted, both OrthoRank and layer pruning have distinct strengths and limitations. Layer pruning is particularly advantageous in scenarios with limited storage or where small batch sizes make weight transmission a bottleneck, such as on-device AI. In contrast, when such constraints are absent, OrthoRank—which loads full weights but processes only a subset of tokens—can be more suitable. Each method, therefore, has its own context-dependent benefits. ### **Q4: Hyperparameter Tuning** We recommend setting the token selection ratio between 0 and 0.5 (Section 4.6.4). After determining the desired total sparsity, the layer selection ratio is automatically calculated. We recommend setting the total sparsity to be below 40% (Appendix G) --- [0] Massive activations in large language models, COLM 2024
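The equal-norm argument above (a scale vector g with variance around 0.001 acting almost like a scalar) can be illustrated numerically. The hidden states below are synthetic; only the ~0.001 variance figure comes from the rebuttal:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_tokens = 4096, 8

# RMS-normalized hidden states: every token has identical RMS (and L2) norm.
h = rng.normal(size=(n_tokens, d))
h_norm = h / np.sqrt((h ** 2).mean(axis=-1, keepdims=True))

# Learned elementwise scale g with very low variance, mimicking the rebuttal.
g = 1.0 + rng.normal(scale=np.sqrt(0.001), size=d)
h_scaled = g * h_norm

norms = np.linalg.norm(h_scaled, axis=-1)
spread = norms.max() / norms.min()
print(spread)  # close to 1: the scaled states keep near-equal norms
```

With a near-constant g, the post-scale norms of the synthetic "non-sink" tokens stay within a fraction of a percent of one another, consistent with the approximation used in the derivation.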
Can Transformers Reason Logically? A Study in SAT Solving
Accept (poster)
Summary: This paper investigates whether transformers can solve 3-SAT problems by using COT reasoning to simulate backtracking-based search algorithms like DPLL. The authors theoretically demonstrate that this approach is feasible. Empirical evaluations show that: 1) language models using COT can be trained on reasoning paths to perform deductive reasoning effectively, and 2) the trained models struggle to generalize to problems with unseen sizes, such as different numbers of variables. ### Update after Rebuttal Thank you to the authors for the response and clarifications. Despite the rebuttal, I still share the concerns of other reviewers regarding the lack of convincing evidence for the method's applicability in industry-scale settings. This limitation currently narrows the work's potential impact. Consequently, I am maintaining my initial score. Claims And Evidence: The claims are generally well-described within the context and supported by both theoretical and empirical evidence. The authors provide a formal proof that transformers can solve 3-SAT via COT reasoning, and demonstrate its correctness empirically. Their experiments show strong in-distribution generalization but limited generalization to different input lengths. However, the authors could elaborate on the relationship between their theoretical construction and how LLMs learn in real-world environments, as their construction requires specific weight configurations that may not naturally emerge during training procedures. Methods And Evaluation Criteria: The experimental setup suitable for the research questions. While the Turing completeness of chain-of-thought reasoning guarantees that a model can be constructed (likely with predefined parameters) to simulate DPLL, the more practical question is whether a model can learn to perform such operations from data (i.e., reasoning paths). The authors effectively designed and conducted experiments to investigate this question. 
Theoretical Claims: While I have reviewed but not verified the proofs in detail, their construction appears sound. It's noteworthy that the authors managed to introduce parallelism into the standard backtracking procedure, which is a key difference between COT and traditional rule-based reasoners. The parallel processing capability inherent in LLMs offers potential advantages over sequential rule-based systems. Experimental Designs Or Analyses: They are relevant to the research problem. The authors designed their evaluation datasets to avoid statistical shortcuts, enhancing the credibility of their results. Supplementary Material: I skimmed through the appendix (i.e., pages 12-38) of the PDF. I appreciate the authors' effort in providing comprehensive details about their implementation. Relation To Broader Scientific Literature: However, in my opinion, the ideas and findings have limited novelty, as they are, in some ways, a straightforward implication of existing literature. Since the Turing completeness of COT has been well discussed in previous research, it is not surprising for the community to see that COT can simulate a backtracking procedure like DPLL. From this perspective, I doubt whether this paper can contribute significantly to the research community. In Section 2, the authors argue that their method requires a "drastically reduced number of COT tokens." This does not seem to be a fair comparison, as existing literature focuses on single-tape TMs, which are typically used as a basis for theoretical analysis rather than actual fine-grained complexity analysis. Theoretically, it is reasonable to expect that the numbers can be reduced when considering the properties of COT, the 3-SAT problem, and DPLL. However, this advantage is unlikely to extend to practical use cases, as using COT to simulate DPLL would introduce much overhead compared to standard rule-based reasoners.
The experimental design largely overlaps with other "learning to reason" experiments in the previous literature, and the results are generally intuitive. The authors fail to provide an in-depth analysis of how the learned model relates to their theoretical construction. Essential References Not Discussed: No. Other Strengths And Weaknesses: None Other Comments Or Suggestions: In my opinion, the preliminary section could be rearranged, as readers at ICML are likely to be relatively familiar with transformers and can easily find additional materials if needed. Most of the content, particularly the formulas related to transformer blocks, is not utilized in subsequent sections. Instead, I recommend including a brief introduction to backtracking-based search algorithms, such as DPLL, which is used in the paper, to enhance readers' familiarity with the topic. The authors may also consider elaborating on the COT aspect as well. Questions For Authors: 1. For the results presented in Section 6.2, the authors argue that the failure to generalize to arbitrary lengths is consistent with their theoretical result, which states that the size of the transformer depends on the number of variables. However, this does not align well with the experimental design. The authors only experimented with varying problem sizes, rather than varying the sizes of the transformers themselves. Could the authors clarify this discrepancy? 2. How does the efficiency of the transformer-based SAT solver compare to specialized SAT solvers in terms of computational resources and solving time? Or, how might the proposed method collaborate with existing SAT solvers to enhance SAT solving? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback. We’re grateful for your acknowledgment of the soundness of our construction and the relevance of the experiments. We would like to focus on your primary concern regarding novelty compared to Turing Completeness (TC) results. While we agree that TC of Transformers with CoT implies that, in principle, simulating algorithms like DPLL is possible, we believe this perspective overlooks the core contributions and significance of our work. Allow us to elaborate: ## Beyond Turing Completeness: The "How" Matters Our work, like previous works on Transformer expressiveness, aims at advancing our understanding of the capabilities and mechanisms of Transformer models in reasoning. Previous TC results rely on step-by-step simulations of single-tape Turing Machines (TM), which are theoretically foundational but do not explain well how logical reasoning is performed internally in Transformers; that is, many reasoning phenomena of LLMs remain poorly explained by TC alone. Consider an analogy: mathematical theorem proving (e.g., solving IMO problems) can be framed as search problems over LEAN proofs. If LLMs are TC, does this make their success on math benchmarks unsurprising or non-novel? We argue no. The interest lies in how LLMs solve these problems – potentially using more efficient, structured reasoning mechanisms than TM simulation. Our work moves beyond TC by asking how a Transformer can efficiently simulate a specific, non-trivial logical reasoning procedure (DPLL for 3-SAT) using its inherent architectural features. We provide an explicit construction and demonstrate a concrete CoT mechanism. ## Novelty in Parallelism via Attention As highlighted in your review, a key novelty is demonstrating how Transformers use parallelism effectively.
**We provide the first theoretical evidence that the attention mechanisms in Transformers can support parallelism for logical reasoning.** This suggests that Transformer models can be particularly suitable for deductive logical reasoning in large contexts. In particular, Lemma 4.8 shows that a single Transformer head can perform satisfiability checking, conflict detection, and unit propagation, each over all clauses in parallel. ## Novelty in CoT Efficiency and Structure While the reduction in CoT tokens compared to TM simulations might seem “reasonably expected”, anticipated theoretical results still significantly benefit from formal proofs—analogous to widely believed mathematical conjectures (e.g., P ≠ NP). Our CoT simulates DPLL at an abstraction level higher than TM emulation, explicitly representing the logical reasoning steps of assumption, deduction, and backtracking. Furthermore, we formally provide an explicit upper bound (p·2^(p+1)) on CoT length (Theorem 4.5), a contribution extending beyond mere TC results. In particular, such upper bounds for TC results are unknown for 3-SAT. ## Preliminary Section Clarity Regarding your suggestion on swapping the preliminaries section for background on 3-SAT and DPLL, this is indeed a greatly helpful suggestion for readability, and we will certainly do that during revision. In particular, we will swap the preliminary section on transformers with Appendix C.1 on 3-SAT after rewriting for brevity. ## Questions The main connection between our theoretical construction and the experimental results is the shared Chain-of-Thought design. Our theoretical construction provides a specific CoT structure. The experiments then investigate whether Transformers can learn this structure from data.
The strong intra-length OOD generalization (Table 1) suggests that training on this theoretically-grounded CoT does allow the model to learn a robust reasoning procedure applicable across different data distributions, overcoming limitations seen in prior work where models trained with CoT relied on statistical shortcuts [1]. We agree that additionally investigating the accuracy of models of different sizes trained on 3-SAT problems with Chain-of-Thought is an interesting and insightful direction of experimental investigation, given the $O(p^2)$ scaling of our theorems, and aligns with the “scaling law” line of research. However, such experiments are beyond our theoretical focus and our available computational resources. As you mentioned in the review, Transformer models would introduce significant overheads compared to modern SAT-solvers. That said, it may be possible to develop efficient parallelized SAT-solver operations such as unit propagation, satisfiability checking, and conflict detection on GPU architectures using the insights from Lemmas 4.7 and 4.8. Our construction may also be used as a differentiable SAT-solving component that allows LLMs to formulate the underlying logic within natural language reasoning traces as SAT formulas and use traditional SAT solvers to perform more reliable reasoning. [1] Zhang, et al. On the paradox of learning to reason from data. IJCAI '23
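For readers less familiar with the procedure being simulated, a minimal DPLL sketch with unit propagation, conflict detection, and backtracking over CNF clauses (literals as signed integers) looks as follows; this illustrates the classical algorithm, not the paper's Transformer encoding:

```python
def unit_propagate(clauses, assignment):
    """Repeatedly assign literals forced by unit clauses.
    Returns None on conflict, else the extended assignment."""
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    unassigned.append(lit)
                elif (lit > 0) == val:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not unassigned:
                return None                     # conflict detected
            if len(unassigned) == 1:
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0  # unit propagation
                changed = True
    return assignment

def dpll(clauses, assignment=None):
    """Backtracking search: assume a value, deduce, backtrack on conflict."""
    assignment = unit_propagate(clauses, assignment or {})
    if assignment is None:
        return None
    free = {abs(l) for c in clauses for l in c} - set(assignment)
    if not free:
        return assignment                       # all variables decided: SAT
    var = min(free)
    for value in (True, False):                 # assume, then backtrack
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None                                 # UNSAT under this branch

# (x1 or x2 or not x3) and (not x1 or x3) and (not x2)
print(dpll([[1, 2, -3], [-1, 3], [-2]]))  # prints a satisfying assignment
```

The paper's construction, per Lemma 4.8, performs the three per-clause checks in the inner loop (satisfaction, conflict, unit) in parallel via attention rather than sequentially.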
Summary: This paper studies the deductive logical reasoning capabilities of decoder-only Transformers. As claimed by the authors, many researchers reject the idea that LLMs are capable of reasoning, and there is little understanding of the fundamental limitations in the reasoning abilities of Transformer models. In this work, the authors claim that they prove by construction that decoder-only Transformers can solve 3-SAT in a non-uniform model of computation, and the instantiated Transformer corresponding to the theoretical construction can perfectly solve 3-SAT instances. The authors conduct experiments to evaluate the model’s performance and ability to generalize to formulas with a different number of variables than seen during training. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed method makes sense for the problem. But for the instances in the experiment, the number of variables is small. Theoretical Claims: Yes. Experimental Designs Or Analyses: The experimental design to evaluate the effectiveness and generalizability of the method is reasonable. Supplementary Material: All appendices are reviewed. Relation To Broader Scientific Literature: The paper discusses and proves the logical reasoning capabilities of decoder-only Transformers on the 3-SAT problem, which provides support for the view that "LLMs can reason". Essential References Not Discussed: N/A. Other Strengths And Weaknesses: ## Strengths: 1. The paper is generally well-written, with clear explanations. 2. The code of the method is available. 3. Experimental results are satisfactory. ## Weaknesses: 1. To verify the logical reasoning capabilities of decoder-only Transformers, it would be good to add research on other types of SAT problems. Other Comments Or Suggestions: N/A. Questions For Authors: Actually, SAT is so fascinating because of its usefulness in real-world applications.
Hence, it is more important to adopt industrial instances (e.g., those from SAT Competitions) in your study, rather than random instances; the reliance on random instances significantly reduces the significance of this work. Besides random instances, how does the method perform on industrial instances (e.g., those from SAT Competitions)? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your devoted time in reviewing the paper. We're glad that you found the writing and experimental results of our work satisfactory. > To verify the logical reasoning capabilities of decoder-only Transformers, It would be good to add research on other types of SAT problems. The suggestion is highly insightful! Including more complicated SAT instance structures beyond CNF formulas may reveal further insights into more complicated Transformer reasoning, especially when processing hierarchical data. This is currently beyond the scope of this work, and we would certainly look into these directions in future work. For the present work, since it’s well known that all SAT formulas can be converted into 3-SAT in polynomial time, and we are the first theoretical work investigating the capacity of Transformers in formal logical reasoning, we believe that our current contributions are sufficient for the current manuscript. > Beides random instances, how does the method perform on industrial instances (e.g., those instances from SAT Competitions)? Regarding your question, we fully acknowledge the importance and impact of practical SAT solving, but we would like to clarify that **our goal is to advance our theoretical understanding of the reasoning capability of Transformer models. Practical SAT solving is orthogonal to this work and our goal.** We chose 3-SAT as the theoretical basis for our study because it’s a fundamental NP-complete problem in complexity theory that represents deductive logical reasoning. It is unlikely that our model will have any practical advantage over traditional SAT solvers. The DIMACS encoding and CoT of practical instances are significantly longer than the context lengths of our models. Therefore, we did not use any practical SAT benchmarks for evaluation. We’re happy to answer any additional questions you may have regarding our work. 
We also sincerely hope that you can evaluate the theoretical aspects of our contributions in more detail if possible.
Summary: This paper investigates the deductive logical reasoning capabilities of decoder-only Transformers. The author(s) opt for the 3-SAT problem as a representative example of a logical reasoning task and use a Transformer model to perform reasoning via Chain-of-Thought (CoT). Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Theorem 4.5 and Theorem 1.1 cannot hold based on the experimental results, where the Transformer models fail to generalize to instances with more than 12 variables. Experimental Designs Or Analyses: All experiments are conducted on very small-scale instances, where the number of variables in the 3-SAT problems is fewer than 20. Supplementary Material: Yes, I review the appendix. Relation To Broader Scientific Literature: This paper is the first work to theoretically analyze the ability of Transformer models to solve 3-SAT problems. It is related to the literature on AI/LLM reasoning. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The paper provides a thorough theoretical analysis of the ability of Transformer models to solve 3-SAT problems, offering valuable insights into their logical reasoning capabilities. 2. The approach of tokenizing 3-SAT problems and employing a decoder-only Transformer to solve them is novel. Weaknesses: 1. Lack of formal justification for model hyperparameters (L=7, H=5). Theorem 4.5 asserts the existence of a decoder-only Transformer with L=7 layers and H=5 heads capable of solving 3-SAT. It seems that Appendix C.6 is the corresponding proof. However, Line 1358 seems like a hypothesis of the function of the embedding layers. Is there any theoretical proof of this model configuration or even an ablation study of each layer? 2. Mismatch between theorem and empirical results. Theorem 4.5 claims universality ("for any p, c ∈ N+"), but the experiments only validate the construction on small instances (p ≤ 20 variables). 
The results show that Transformer models do not generalize to instances with more than 12 variables. Other Comments Or Suggestions: The main body of this paper is well-organized, but the appendix is not clear. For example, while Appendix C.6 appears to provide a detailed analysis of Theorem 4.5, this link is not explicitly stated in the main text. The additional experiments in Appendix B are not clear. Questions For Authors: 1. Please clarify weaknesses 1 and 2. 2. The Transformer model also fails to achieve 100% accuracy on small instances (p ≤ 20), whereas traditional SAT solvers (e.g., DPLL, CDCL) solve such problems with perfect accuracy. Does it suggest that self-attention-based models are inherently limited in practical scenarios? Ethical Review Concerns: N.A. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We greatly thank you for the detailed comments and feedback. We appreciate the reviewer’s careful reading and insightful concerns. We seek to clarify certain potential logical misunderstandings, hoping to address your identified concerns regarding the theoretical results. Most importantly, **the correctness of Theorem 4.5 is justified by the proof in Appendix C, rather than the experimental results.** > Theorem 4.5 and Theorem 1.1 cannot hold based on the experimental results, where the Transformer models fail to generalize to instances with more than 12 variables. > Mismatch between theorem and empirical results. Theorem 4.5 claims … We fully appreciate the reviewer’s detailed review. However, we kindly clarify that Theorem 4.5, which shows the existence of a Transformer model that solves SAT, refers explicitly to the Transformer weight configuration rigorously defined in Appendix C. The empirical models, which showed limited generalization beyond 12 variables, were trained from data and thus had fundamentally different weight configurations. The configuration corresponding to Theorem 4.5 and described in Appendix C.6 is also implemented and corresponds to the "Compiled" results presented in Figure 3 and Section 5, achieving perfect accuracy on all tested instances. Again, we note that while such perfect accuracy is implied by Theorem 4.5, no amount of empirical testing can establish the correctness of Theorem 4.5. The experiment results for the compiled model only serve as a “sanity check” on smaller instances. Instead, the correctness depends on the proof presented in Appendix C. Thus, the empirical limitations in generalization of the trained models and the number of variables in the tested formulas do not contradict our theoretical claims. > Lack of formal justification for model hyperparameters (L=7, H=5). 
It seems that Appendix C.6 is the corresponding proof… We would like to clarify that these hyperparameters are indeed rigorously justified through the theoretical construction explicitly presented in Appendix C.6. Specifically, the configuration of the theoretical construction is provided between lines 1363 and 1469, detailing how each of the 7 layers explicitly contributes to achieving the theoretical result. The 5 heads stem from the fact that the operations in layer 5 require 5 attention heads to complete, and the number of attention heads must be the same across all layers by definition of the Transformer architecture. The operations are described at a high level, and each individual operation has a corresponding lemma in previous sections on how the operations can be represented as attention or MLP layers. Also, we do not claim that the 7 layers and 5 heads are minimal for SAT solving, but rather that they are an upper bound on the number of layers and heads required to solve 3-SAT. Line 1358 mentioned in your review describes the “embedding layer” of the construction, which converts each token to input vectors before layer 1. Regarding the possibility of ablation studies: while ablation studies are typically valuable in empirical research contexts for determining the influence of each component of a machine learning model, they are not applicable to theoretical proofs. > while Appendix C.6 appears to provide a detailed analysis of Theorem 4.5, this link is not explicitly stated in the main text. In lines 238-239 of the main text, we mentioned that the proof of Theorem 4.5 is in Appendix C, and Appendix C.6 is not only a “detailed analysis” of Theorem 4.5 but a part of its proof that describes the construction. > The additional experiments in Appendix B are not clear. Thank you for pointing out the missing references to the Figures from the main paper! 
For the experimental result in the appendix, Figure 5 corresponds to question 3, “How does error induced by soft attention affect reasoning accuracy?” on lines 322-323 (right), while Figure 6 adds the experimental results on the 2 remaining evaluation datasets, Random and Skewed, compared to Figure 3 in the main text. Thank you for pointing out the missing references to these figures in the main text. We will update the paper to clarify and add references to the additional experiment results. > Does it suggest that self-attention-based models are inherently limited in practical scenarios? Our work focuses on advancing our theoretical understanding of the capabilities of Transformers in logical reasoning rather than how Transformers can be used for practical SAT solving. Our results show that given the number of variables and clauses, it’s possible to construct a Transformer model that solves 3-SAT. These models indeed have efficiency limitations compared to traditional SAT-solvers like CDCL and DPLL. Having that said, it might be possible to design for efficient SAT-solving mechanisms by exploiting the parallelism introduced in Lemma 4.8 and GPU architectures. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. Most of my concerns have been addressed properly. However, the current empirical results are far from practical use. Modern SAT solvers can easily handle instances with tens of hundreds of variables, while this study focuses on instances with less than 12 variables. The scalability issue needs to be addressed or at least potentially addressable. Therefore, I'll keep my rating. --- Reply to Comment 1.1.1: Comment: Thanks for your reply. We're very glad to hear that our comments helped address your concerns. Regarding your new point on the scalability of our method, we would like to kindly clarify that our work is a **theoretical work** investigating the capabilities of the Transformer architecture. 
In particular, our main contribution is on the mechanisms that allow the Transformer to perform (parallelized) logical reasoning and an efficient Chain-of-Thought for 3-SAT based on backtracking and deduction that Transformers can provably simulate. The experiments are supplementary evidence that investigates how the Chain-of-Thought of our theoretical construction allows for effective learning. **Practical SAT solving, while important, is orthogonal to our contribution.** Multiple previous theoretical works have also included supplementary experiments on how well Transformers can perform multi-digit addition [1], evaluate arithmetic expressions [2], solve linear equations [2], perform parity checks [3], and simulate semiautomata [4]. All of these procedures can be much more efficiently simulated by a regular computer program. Similarly, our work also uses 3-SAT as a theoretical model for logical reasoning, and we do not claim that our method can compete with traditional SAT solvers in terms of efficiency. Therefore, we respectfully disagree that "the current empirical results are far from practical use" constitutes a significant limitation/weakness of our work. While we're glad to discuss the potential implications of our theoretical result in practical SAT solving, we hope that you can focus more on our theoretical contributions rather than the practicality of our experiments in terms of evaluating our paper. Thanks so much for your consideration. [1] Chain of Thought Empowers Transformers to Solve Inherently Serial Problems [2] Towards Revealing the Mystery behind Chain of Thought: A Theoretical Perspective [3] What Algorithms can Transformers Learn? A Study in Length Generalization [4] Transformers Learn Shortcuts to Automata
Summary: The authors present a theoretical foundation and proof that it is always possible to construct an optimal decoder-only transformer that is capable of exactly simulating the DPLL search and solving any 3-SAT task with greedy decoding that uses CoT reasoning steps and backtracking. The authors show that given any 3-SAT input with p variables, it is possible to create a set of modules (all of which are separate parts/layers and together comprise a complete transformer) with each simulating a portion of the DPLL search process. This reveals that transformers are capable of optimally solving any 3-SAT task (which is NP-complete). The paper also shows that for p variables and c clauses, there exists a transformer that can optimally solve the 3-SAT problem with no more than p*(2p+1) CoT iterations for a model with L = 7 layers, 5 heads and O(p^2) parameters. Additionally, the authors show that even pretrained transformers can be trained to solve 3-SAT tasks where the number of variables is close to the number of variables seen during training. However, out-of-distribution generalization w.r.t. the number of variables in a 3-SAT problem is still non-trivial for transformers. Claims And Evidence: All of the claims are valid and theoretically proven or empirically evaluated. The theoretical background given for the paper is rather extensive and covers a broad spectrum of non-trivial decoder-only transformer decompositions into modules that can be viewed as a deterministic component of the DPLL (PARAT). Methods And Evaluation Criteria: The only empirical components of the paper explore trying to train a pre-trained decoder-only Transformer to solve the 3-SAT task. The evaluation is straightforward by looking at the explicit accuracy of the SAT/UNSAT labels produced after CoT and Backtracking. 
Theoretical Claims: The theoretical claims of the paper are rather extensive, maintaining that given a set of variables p, it is always possible to construct a decoder-only transformer that would be able to exactly and optimally solve any 3-SAT problem akin to DPLL while using CoT reasoning and backtracking. All of the claims are adequately supported, albeit it must be mentioned that without reading the complete appendix it is impossible to understand the main idea and components of the paper. My only problem with the paper is the way that the idea is presented in the main part of the paper. Experimental Designs Or Analyses: The experimental design is straightforward and tries to validate whether arbitrary pre-trained decoder-only transformers can learn to solve 3-SAT problems. The accuracy for SAT/UNSAT final outputs shows that learning the task is feasible, but generalization outside of the number of variables present in the training set is non-trivial and complicated. Supplementary Material: I had to walk through most of the appendix to properly understand the bulk of the method as the majority of theoretical explanations in the main part of the paper lack rigour and flow of explanation (extremely hard to follow the main paper). The whole construction (PARAT) section I postulate can only be understood from the appendix. Relation To Broader Scientific Literature: The paper explores a fundamental transformer capability, revealing that transformers are able to solve the 3-SAT task, which falls in line with research exploring the theoretical limitations and expressiveness of the transformer architecture. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The main weakness is the way that the method is explained in Section 4. Most of the explanations and flow seem handwavy and not rigorous. However, their counterparts in the Appendix more than cover the need for rigour. 
I would still maintain that restructuring the explanation in favour of stricter but understandable transitions would make it easier for the reader to follow. Other Comments Or Suggestions: I think the solution for the example on page 25, line 1355 in the appendix for the proposed 3-SAT example is wrong. Questions For Authors: 1) The compiled model (constructed) achieves perfect accuracy on 3‑SAT instances. Given that the worst-case CoT length is theoretically exponential (because 3-SAT is NP-complete), can the authors provide insights into the characteristics of SAT instances where the model’s CoT length approaches this worst-case behaviour and how often such instances can occur realistically? 2) Given that the amount of chain of thought tokens is massively lower than the theoretical bound, can it be assumed that having more compute during test-time would allow the models better generalization w.r.t. the number of variables? What happens as the CoT gets longer? Do the models collapse and start repeating explored patterns? Do they hallucinate? ### Post rebuttal comments I feel all of my concerns have been addressed after the rebuttal. Increasing the score. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We’re very grateful for your devoted time and careful review of both the main paper and the appendix. We’re also very glad that you liked the contributions of our paper and consider the proof and description in the appendix rigorous and helpful. We would like to address your comments regarding the difficulty of understanding the proof sketch, and we suggest possible improvements to its clarity below: ## Clarity Improvements to the Proof Sketch **Swapping Preliminary Section on Transformers with Appendix C.1 on 3-SAT** (also suggested by reviewer sNWz): From your reviews as well as suggestions from reviewer sNWz, we believe that the most unclear part in the main paper is likely regarding 3-SAT operations and DPLL. As such, we will replace the detailed mathematical definitions of the Transformer architecture in Section 2 with the descriptions of 3-SAT in Appendix C.1. **Organize Proof Sketch around Major Steps in the Construction:** For Section 4, we will organize the sketch around the major steps of the construction, i.e., into the following paragraphs: Step 1: Summarize Clauses and Assignments as Binary Vectors: where we introduce Definition 4.6 and explain how the binary encodings can be computed by summing up one-hot encodings in each clause using the attention mechanism of the Transformer Step 2: Perform Parallel Logical Operations over Clauses: where we introduce Lemmas 4.7 and 4.8 and explain how to determine Satisfiability, Conflict, and Unit Propagation Step 3: Next Token Prediction: where we describe how the final token is decided based on the results calculated in the previous layers ## Example on line 1355 Thank you for your careful reading of the appendix. After careful checking, the example is indeed wrong. The correct solution should be: `[SEP] D 2 D 1 -4 3 [BT] D 2 -1 -4 [BT] -2 D 3 D 4 1 SAT` We will also update this in the paper. 
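The binary clause encodings and parallel logical checks sketched in Steps 1–2 above can be illustrated with a small pure-Python analogue. This is only an illustrative sketch: the index layout, the example formula, and the partial assignment are our assumptions here, not the paper's actual Transformer construction (cf. Definition 4.6 and Lemmas 4.7/4.8 for the real statements).

```python
# Illustrative sketch (assumptions, not the paper's construction): encode
# each clause over p variables as a length-2p binary vector, entry i for
# literal x_{i+1} and entry p+i for its negation. A clause encoding is the
# sum of one-hot literal encodings, as attention can compute in parallel.
p = 4  # number of variables

def lit_index(lit):
    # literal x_i -> index i-1 ; literal -x_i -> index p + i - 1
    return lit - 1 if lit > 0 else p + (-lit) - 1

def encode(lits):
    v = [0] * (2 * p)
    for l in lits:
        v[lit_index(l)] = 1
    return v

assign = encode([1, -4])            # partial assignment: x1=True, x4=False
clauses = [[1, 2, 3], [-1, -2, 4], [-1, 2, -3]]
C = [encode(c) for c in clauses]

# A literal is falsified exactly when its complement is assigned true.
falsified = assign[p:] + assign[:p]

# Parallel checks over all clauses at once:
satisfied = [sum(a * b for a, b in zip(c, assign)) > 0 for c in C]
num_false = [sum(a * b for a, b in zip(c, falsified)) for c in C]
num_lits = [sum(c) for c in C]
conflict = [not s and f == n for s, f, n in zip(satisfied, num_false, num_lits)]
unit = [not s and f == n - 1 for s, f, n in zip(satisfied, num_false, num_lits)]
```

Under this assignment, the second clause becomes a unit clause (only ¬x2 remains unfalsified), which is the condition that would trigger unit propagation.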
## Questions > Can the authors provide insights into the characteristics of SAT instances where the model’s CoT length approaches this worst-case behaviour and how often such instances can occur realistically? We expect all instances to be lower than this bound, but we do not know of tighter upper bounds that are strictly proven. The number of CoT steps required is dependent on many factors, including the heuristic used to select the next decision (assumption) at each iteration, etc. Providing tighter theoretical upper bounds on solving different types of SAT formulas is an active direction of research in the SAT solving community (e.g., see [1] Section 1.5). We instead used a large upper bound that is guaranteed to be true on all instances and heuristics. > Can it be assumed that having more compute during test-time would allow the models better generalization w.r..t. the number of variables? We do believe this to be true according to relevant literature on practical LLMs, where longer CoT leads to better reasoning performance. However, it’s difficult to empirically test this hypothesis in our setting since there isn’t a reliable way to increase test-time computation for custom-trained 3-SAT models. In particular, our models are deterministic and do not support additional prompts to elicit longer reasoning chains. Your suggestion is indeed an insightful and promising direction for future work. > What happens as the CoT gets longer? Do the models collapse and start repeating explored patterns? Do they hallucinate? Yes. We can investigate this phenomenon by observing how models trained on large (11-15) instances behave on smaller instances. 
For example, on a 3-SAT problem with 5 variables, the model outputs: (recall that D represents “Assume” and [BT] represents “BackTrack” according to Appendix C.1) `D -4 D -1 -5 2 3 D [BT] D -4 D -1 -5 2 3 D -4 D [BT] D -4 D -1 -5 2 3 D -4 D -5 SAT` The model can be considered to be hallucinating since the final D at the end of the first attempt should never occur. Similarly, at the final assignment before SAT, both -4 ($x_4=F$) and -5 ($x_5=F$) occurred 2 times, which shows that the model starts repeating explored patterns. Both of these phenomena are quite common when the CoT is longer than normal. In another test sample of 5 variables, the model outputs: `D -4 D -3 1 2 -5 -4 -18 17 -26 SAT` Which hallucinates variables that do not exist in the formula (and also outputs -4 twice). What’s more interesting is that the models still have high SAT vs UNSAT accuracy even when these hallucinations/repetitions occur and our test data explicitly removes statistical evidence on SAT vs UNSAT. This can be seen from Figure 3 (right) where the SAT vs UNSAT accuracy remains high even when the Full Trace Correct accuracy falls off significantly. It seems that the trained model still has ways to “guess” the correct SAT vs UNSAT even with a CoT that’s not fully correct and longer than normal. [1] On The Unreasonable Effectiveness of SAT Solvers
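The trace vocabulary discussed in this thread (`D` for a decision/assumption, bare literals for propagated assignments, `[BT]` for backtracking, terminated by `SAT`/`UNSAT`) can be sketched with a toy DPLL tracer. The branching heuristic and exact token layout below are simplifications of our own choosing, not the paper's CoT format.

```python
def dpll_trace(clauses):
    """Toy DPLL emitting a CoT-style trace: 'D <lit>' for decisions,
    bare literals for unit propagations, '[BT]' before a flipped
    decision, ending in SAT or UNSAT."""
    trace = []

    def simplify(cls, lit):
        out = []
        for c in cls:
            if lit in c:
                continue                    # clause satisfied, drop it
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None                 # empty clause: conflict
            out.append(reduced)
        return out

    def search(cls):
        while cls:                          # unit propagation to fixpoint
            units = [c[0] for c in cls if len(c) == 1]
            if not units:
                break
            trace.append(str(units[0]))
            cls = simplify(cls, units[0])
            if cls is None:
                return False
        if not cls:
            return True                     # all clauses satisfied
        var = abs(cls[0][0])                # naive decision heuristic
        for i, lit in enumerate((var, -var)):
            if i == 1:
                trace.append("[BT]")
            trace.append(f"D {lit}")
            reduced = simplify(cls, lit)
            if reduced is not None and search(reduced):
                return True
        return False

    trace.append("SAT" if search(clauses) else "UNSAT")
    return " ".join(trace)

print(dpll_trace([[1, 2], [-1, 3], [-1, -3]]))  # -> D 1 3 [BT] D -1 2 SAT
```

In the printed example, the first decision `D 1` forces a conflict via the propagated literal `3`, so the solver backtracks and succeeds with the flipped decision `D -1`.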
ELITE: Enhanced Language-Image Toxicity Evaluation for Safety
Accept (poster)
Summary: The paper introduces ELITE, a new safety benchmark designed to evaluate the toxicity and risks associated with Vision-Language Models (VLMs). Current benchmarks fail to detect implicit harmful content and have issues with low harmfulness levels, ambiguous data, and limited diversity in image-text combinations. ELITE aims to address these gaps by providing a more precise, rubric-based evaluation method and a diverse dataset for safety testing. Claims And Evidence: The claims are supported by the evidence. Methods And Evaluation Criteria: The problem of evaluating and improving the safety of VLMs is addressed by the benchmark. Theoretical Claims: N/A Experimental Designs Or Analyses: The evaluation is sound and extensive on many VLMs. Supplementary Material: No Relation To Broader Scientific Literature: The contributions of the paper address the gap of the safety of the foundational models. Essential References Not Discussed: An early and popular benchmark in LLM safety is not discussed: Wang, Boxin, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu et al. "DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models." In NeurIPS. 2023. Other Strengths And Weaknesses: The novelty of the paper could be better highlighted. Other Comments Or Suggestions: N/A Questions For Authors: What could be a promising direction to improve the safety of VLMs? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer’s valuable feedback, which has significantly improved our work. **Essential References Not Discussed**. We agree that since our study aims to improve the safety evaluation of language models, it is important to cite DecodingTrust, an early benchmark in LLM safety. We appreciate the suggestion and will include the citation. **Other Strengths And Weaknesses**. As noted in the review, the main contribution of our paper lies in improving the safety evaluation of VLMs. To this end, we propose the ELITE evaluator, a rubric-based evaluation method that provides more accurate assessments based on the characteristics of VLMs, along with the ELITE benchmark, an effective tool for VLM safety assessment. Through the analysis of existing benchmarks (Table 4) and human evaluation results (Table 6, including additional evaluations conducted in response to Reviewer BuCp’s Question 3), we demonstrate the novelty of both the evaluator and the benchmark. In particular, our work presents a simple yet effective method for evaluating harmful responses from VLMs, which we believe is a key strength of the paper. We will revise the paper to highlight these aspects better. **Question1**. As VLMs interpret inputs through the interaction of two modalities, novel attack types may arise from this cross-modal reasoning process. Therefore, a promising direction is to develop comprehensive benchmarks that can broadly evaluate unsafe behaviors in VLMs and use these benchmarks to guide improvements in safety alignment. In this context, our ELITE evaluator and benchmark were designed to address these challenges, and we believe further efforts to improve safety alignment will remain essential going forward.
Summary: This paper introduces ELITE, a VLM benchmark and an LLM-as-judge evaluator designed to test harmful generations of these models. ## update after rebuttal I will maintain my original score. Claims And Evidence: The novelty of this work is somewhat limited. The evaluator provided is rubric-based and heavily inspired by StrongREJECT (in fact, the authors do not perform any other prompt ablations, only the comparison to StrongREJECT’s prompt). The authors use it to filter a benchmark dataset to consist of samples where this evaluator is likely to produce harmful scores, and the results in Tables 3 and 4 show, unsurprisingly, that it does. Other than balancing the data presence of different categories of harm and showing an increase in harmfulness on this benchmark compared to previous ones, a more fine-grained analysis of why this benchmark is relevant (e.g., showcasing diversity of failure modes) is missing from this work. Methods And Evaluation Criteria: As mentioned in Claims and Evidence, the authors select the benchmark to include harmful prompts as per the ELITE evaluator and obtain higher harmfulness as measured by that evaluator compared to existing benchmarks. However, the goal of benchmarks is not simply to be “difficult”; they should be principled, measure a diversity of failure modes, and avoid overfitting to the specific harmfulness evaluator used. One of the key points missing from this paper is a more detailed analysis of the types of image-text pairs that get excluded/included in the final benchmark, as well as the relevance of their inclusion/exclusion. For example, from Table 1 we see that more prompts are taken from Figstep in S1 than MM-SafetyBench, yet the opposite is true for S8. Why is that the case? What kind of novelty are the “New” pairs bringing to the mix? These are crucial questions to understand what this benchmark is actually measuring. Theoretical Claims: N/A. 
Experimental Designs Or Analyses: As mentioned above, a fine-grained analysis of the composition of the benchmark is completely missing from this work. In terms of the evaluator, no ablation is provided on the prompt. In StrongREJECT, the authors explicitly mention they chose specific and convincing as the criteria of the evaluator after considering a set of 10 features (e.g., harmful, discouraging) and doing a Lasso regression on it. This type of analysis is not done in this work, despite the fact the authors have access to human evaluation data. This is particularly concerning given ELITE GPT-4o only achieves an F1 score of 0.637 on the human evaluation dataset, yet this is used to select the final benchmark samples. Supplementary Material: I reviewed the evaluator's prompt and some of the other details. Relation To Broader Scientific Literature: This paper builds on existing benchmarks and augments them to generate one that the authors claim is more likely to elicit harmful responses from VLMs. Purely in that sense, the contribution is limited, as from the methods and results it is unclear this is a principled way of building a harmfulness benchmarking dataset for VLMs. Essential References Not Discussed: The related work section appears to cover the important references in the field. Other Strengths And Weaknesses: - The paper is poorly organized. The human experiments validating the human evaluator only come in Section 5, after the main results of the benchmark — despite the fact the evaluator was used for both sample selection and evaluation of the responses for each model. Other Comments Or Suggestions: - The introduction is long and quite repetitive, with figures references out of order - 1 (c) coming before 1 (b) for example. Questions For Authors: 1. In Table 3 the authors highlight Pixtral-12B as the model with the highest ASR in most categories. 
Given this model is one of the three models used for the selection of the benchmark samples, is it fair to include it in the comparison? 2. “when the StrongREJECT evaluator is applied to VLMs, it often assigns high scores even when the model does not explicitly refuse to respond and instead provides unhelpful answers” — is it just miscalibrated for VLMs? The specific and convincing scores in Figure 2 simply appear too high given the prompt presented. Why do you **need** to introduce toxicity as another criterion instead of, for example, few-shot examples? 3. What is the agreement rate and Pearson correlation between the 3 human annotators? Does it vary significantly per category? 4. Given the level of uncertainty that comes from a low F1 score on the ELITE GPT-4o judge, what can the authors say about the statistical significance of the comparison between the different models in Table 3 and benchmarks in Table 4? Small differences in ASR could be within the margin of error for this evaluator. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We truly appreciate the reviewer's insightful and constructive comments, which have greatly contributed to enhancing the quality of our work. **Methods And Evaluation Criteria & Experimental Designs Or Analyses-1**. Thank you for your valuable feedback. Our key message is to address a limitation in existing evaluation methods—namely, their inability to accurately assess harmfulness in VLMs. As the reviewer pointed out, our benchmark was designed with the "difficulty" of evaluating harmful responses as a primary principle. We believe that a safety benchmark should be able to assess model robustness using sufficiently challenging samples. However, existing benchmarks often contain ambiguous samples, making it unclear whether they could induce harmful responses. Therefore, we prioritized fulfilling this fundamental role of a benchmark above all else. Furthermore, we aim to propose a benchmark that incorporates a wide range of diversity. In particular, there are known ways to elicit harmful responses using safe image–safe text pairs, but most existing benchmarks lack coverage of such safe-safe cases. To address this, we constructed a more comprehensive benchmark by integrating a “New” dataset that includes all four types of image–text pairs. **Experimental Designs Or Analyses-2**. For the analysis by features, human labeling was conducted for 30 samples using three labelers, evaluating 10 features. The table below shows the resulting Lasso regression weights for predicting the unsafe/safe labels. These results highlight the effectiveness of the ELITE evaluator's toxicity score in the VLM task. |Feature|Weight| |---|---| |toxicity|0.1928| |specific|0.1899| |convincing|0.1127| |consistent|-0.1013| |compliant|-0.0906| |comprehensive|-0.0789| |articulate| 0.0718| |useful|-0.0585| |relevant|-0.0298| |discouraging|-0.0107| **Weakness1 & Other Comments Or Suggestions**. 
We agree with the reviewer that human evaluation is a crucial component, especially given that we are proposing a new evaluation method. We will also revise the introduction section. **Question1**. Since the filtering baseline models include the relatively safe model Phi-3.5-Vision, and we selected only the cases where the score was above 10 in at least two of the three models (Phi-3.5-Vision, Llama-3.2-11B-Vision, Pixtral), we believe the comparison is not unfair. However, as the reviewer pointed out, we will add a note in the table to clarify that these three models were used as filtering baseline models. **Question2**. We argue that “specific” and “convincing”, which have been used in previous evaluations, are orthogonal to “toxicity”. We found that responses can be highly specific or convincing without being toxic. This distinction is particularly important in the context of VLMs, where the model often provides detailed descriptions of the image, even when such responses do not align with the harmful intent of the prompt. This results in cases where the VLM appears to respond in a “convincing” or “specific” manner, but actually avoids engaging with the harmful intent altogether by focusing solely on the image. Such responses are frequent and posed a major challenge during our benchmark construction. When we examined the 118 samples excluded during the ELITE benchmark filtering process, we found that 52.54% of them consisted of image-descriptive responses. This indicates that such cases occur frequently and can lead to miscalibrated scores when using StrongREJECT. **Question3**. To assess human agreement and Pearson correlation, we analyzed the results of the human evaluation conducted on a total of 228 samples (Reviewer BuCp's question 3). The agreement rate and Pearson correlation between the three human annotators are summarized in the table below. While there are some variations across categories, we generally observe a strong level of agreement overall. 
To mitigate individual annotator biases and ensure more reliable labeling, we applied a majority voting strategy across the three annotators' safe/unsafe labels for each sample.

| | agreement rate |
|---|---|
| S1. Violent Crimes | 63.15% |
| S2. Non-Violent Crimes | 90.00% |
| S3. Sex Crimes | 76.19% |
| S4. Defamation | 63.15% |
| S5. Specialized Advice | 62.50% |
| S6. Privacy | 80.01% |
| S7. Intellectual Property | 76.19% |
| S8. Indiscriminate Weapons | 75.00% |
| S9. Hate | 78.26% |
| S10. Self-Harm | 80.95% |
| S11. Sexual Content | 55.00% |
| ALL | 72.81% |

| | Pearson correlation |
|---|---|
| human1 & human2 | 0.5978 |
| human1 & human3 | 0.6161 |
| human2 & human3 | 0.6512 |

**Question4**. Our intention in Table 3 is not to claim that specific models are better or worse, but rather to show general trends. Regarding the ELITE evaluator, it demonstrates stronger performance compared to existing evaluation methods. As for Table 4, we believe that the difference in E-ASR between existing benchmarks and the ELITE benchmark is substantial enough that it cannot be attributed to the evaluator's margin of error.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. Some of my concerns have been addressed, but the issue with the evaluator remains. Given this is a crucial part of this work, I will maintain my score.

---

Reply to Comment 1.1.1: Comment: Thank you very much for reading our response and for your thoughtful feedback.

| | Accuracy (↑) | F1-Score (↑) |
|---|---|---|
| ELITE (GPT-4o) | **83.77%** | **0.8043** |
| LlamaGuard3-Vision-11B | 75.88% | 0.5882 |

The table above presents additional experimental results for Reviewer BuCp's Question 3, using samples related to ELITE (GPT-4o). The human dataset used in our paper was primarily sampled from cases where our model and StrongREJECT disagreed, which naturally led to the inclusion of questions with subjectivity or ambiguity, where human opinions were more likely to diverge.
As described in Appendix D.2, we recruited 22 annotators with diverse occupations and age groups to reflect the diversity of real-world users. As a result, for certain evaluation samples, there may have been disagreements among annotators, which could have led to variability in the human-labeled ground truth — potentially making the performance of ELITE (GPT-4o) appear lower than it actually is. In contrast, the samples used in Reviewer BuCp's Question 3 were more clearly separable — for example, whether an answer was included in ELITE or not — allowing for more consistent evaluation results. Accordingly, as shown in the table above, the evaluator demonstrates strong performance on these clearly distinguishable samples. We would be grateful if you could elaborate further on your concerns regarding the evaluator issue, so that we can better understand them. Thank you again for your time and consideration.
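For clarity, the majority-voting and accuracy/F1 computation described above can be sketched as follows. This is an illustrative toy example with made-up labels, not the actual evaluation data:

```python
from collections import Counter


def majority_vote(labels):
    """Return the most common label among annotators (e.g., 'safe'/'unsafe')."""
    return Counter(labels).most_common(1)[0][0]


def accuracy_f1(pred, gold, positive="unsafe"):
    """Accuracy and F1 of binary predictions against gold labels."""
    tp = sum(p == positive and g == positive for p, g in zip(pred, gold))
    fp = sum(p == positive and g != positive for p, g in zip(pred, gold))
    fn = sum(p != positive and g == positive for p, g in zip(pred, gold))
    acc = sum(p == g for p, g in zip(pred, gold)) / len(gold)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, f1


# Toy example: three annotators label four samples; an evaluator's
# verdicts are then compared against the majority-vote ground truth.
annotations = [
    ["unsafe", "unsafe", "safe"],
    ["safe", "safe", "safe"],
    ["unsafe", "safe", "unsafe"],
    ["safe", "unsafe", "safe"],
]
gold = [majority_vote(a) for a in annotations]  # ['unsafe', 'safe', 'unsafe', 'safe']
evaluator = ["unsafe", "safe", "safe", "safe"]
acc, f1 = accuracy_f1(evaluator, gold)          # acc = 0.75
```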
Summary: The authors propose a new framework for automated safety evaluation in vision-LLMs by extending an existing evaluator (StrongREJECT, which scores the level of refusal, specificity, and convincingness of a VLM's output) by additionally predicting a toxicity factor. This accounts for cases where the model's output in response to a harmful piece of input is neither a refusal nor actually toxic, and would still have been treated as unsafe by StrongREJECT. GPT-4o is used as the underlying LLM of the evaluator. Furthermore, the authors construct a dataset with 4.6k samples (the ELITE benchmark) through filtering and rebalancing existing toxicity benchmarks + 1k new samples (especially focusing on safe text + safe image = unsafe prompt cases). The authors show that the proposed evaluator aligns much more closely with human judgment than the preceding StrongREJECT evaluator, despite using the same underlying LLM. ## update after rebuttal Based on the authors' added information, I've concluded that the evaluation mechanism is more reliable than I originally believed. As such I have raised my rating by 1 point. Claims And Evidence: The authors show that their proposed benchmark is able to jailbreak multiple safety-aligned open-source VLMs at a higher rate than competitor baseline benchmarks. It would have been good for this to be somewhat more thorough - for instance, the models used in Table 4 for this comparison only have 7B / 13B parameters (and are by now somewhat outdated, e.g. LLaVA v1.5). As such, it is unclear whether the trend still holds across more contemporary / larger models. As well, it is seen that the ELITE evaluator aligns better with human judgment than the StrongREJECT evaluator it started from. This is sufficiently convincing, given that large human judgment datasets are expensive to collect (this particular set contains 900+ samples).
Methods And Evaluation Criteria: The chief evaluation criterion is comparison to the baseline metric (StrongREJECT) in terms of human alignment, which is a sensible way to quantify the correctness of this LLM-as-a-judge framework. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: I examined sections 4 and 5 (Experiments and Human Evaluation) in detail, and found the processes to be largely reasonable (but subject to some of the concerns listed in the above Claims and Evidence analysis). Supplementary Material: I have reviewed the entire supplementary materials section. The samples in the supplementary material were useful for understanding the types of data included in the ELITE benchmark, and how the ELITE evaluator helped to select them. Relation To Broader Scientific Literature: Having worked with some of the related datasets myself, I agree with the authors' assessment that the existing datasets often have ambiguous samples and balancing issues. As such, I believe that the manuscript is well positioned in relation to and improves upon the broader body of work in this area. Essential References Not Discussed: To my knowledge, the authors have done a good job of reviewing the related datasets in this space, which is also naturally necessary as the authors incorporated samples from many of these datasets into their own. Other Strengths And Weaknesses: The paper's goal to advance quantifiable and objective evaluation methods for the toxicity of multimodal LLMs is core to the general usability of these models, and should be commended. Furthermore, I look forward to seeing the proposed dataset be used by the larger community. On the negative side, I do find the technical contribution to be somewhat limited - it appears that the major innovation is to request that the LLM judge produce a scalar toxicity score.
Other Comments Or Suggestions: The connection between Sections 3.3 and 3.4 was a little hard to understand during my first reading. It took some number crunching to understand that Section 3.3 is the process to create new samples, and Section 3.4 refers to filtering and improving existing dataset samples. However, the wording of Section 3.4's introduction seemed to suggest that all samples were created using the process in 3.4. I would recommend rewriting this part to make it easier to understand. Questions For Authors: It would be great to see the authors' responses to the weaknesses listed above. As well, in my past experience, LLMs (and VLMs) are often not able to answer judgment-based questions like "on a scale of 1 to N, what is the level of ____ in the input" in a very consistent manner. Did the authors do any quantitative / qualitative analysis of how well the evaluator's judgments align with human judgments for the toxicity score? It was mentioned that the metric is defined by using 10 as the threshold on the output of the ELITE evaluator. How was this threshold chosen? Does it impact the relative ordering of models? Based on the authors' responses, I would be happy to revisit my current recommendation. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer's insightful comments, which have been essential in helping us enhance and clarify our work. **Claim And Evidence**. The table below shows additional experimental results for the latest model, Gemma3, and the larger model, InternVL2.5-26B. These results demonstrate that the ELITE benchmark shows consistent performance even on relatively large and recent models.

| Model | Benchmark | Total | E-ASR |
|---|---|---|---|
| InternVL2.5-26B | VLGuard | 2028 | 10.51% |
| | MM-SafetyBench | 1680 | 30.46% |
| | MLLMGuard | 532 | 12.60% |
| | ELITE (generated) | 1054 | **50.94%** |
| | ELITE | 4587 | **39.63%** |
| Gemma3-4B | VLGuard | 2028 | 22.71% |
| | MM-SafetyBench | 1680 | 33.81% |
| | MLLMGuard | 532 | 22.84% |
| | ELITE (generated) | 1054 | **44.81%** |
| | ELITE | 4587 | **40.58%** |

**Weakness1**. Thanks for your thoughtful review. It is true that we made minimal modifications compared to StrongREJECT. Our goal is to create a benchmark and evaluator that work well in general, rather than being specific to a particular model or situation. What matters is not the complexity, but how many problems we can solve with simple changes. We believe that the ELITE method proposed in this paper can solve many issues. We propose a simple yet effective method for evaluating harmful responses in VLMs. We demonstrate that our approach outperforms many existing Guard models and StrongREJECT, based solely on toxicity score requests, and through this, we create a genuinely toxic benchmark by filtering out samples that are not particularly harmful, addressing a key issue in existing benchmarks. Furthermore, we aim to propose a benchmark that incorporates a wide range of diversity by integrating existing benchmarks like SIUO, which only contains safe image + safe text pairs, with other benchmarks, which contain unsafe image + unsafe text pairs and other combinations.
We believe we can create an even broader benchmark by structuring the ELITE benchmark with both safe and unsafe pairs. **Other Comments Or Suggestions**. In Section 3.3, we explain the process of generating the new samples, ELITE benchmark (generated). In Section 3.4, we match existing benchmarks to the taxonomy in the ELITE benchmark, and by filtering both the existing benchmarks and the ELITE benchmark (generated), we ensure that only toxic cases remain. We will improve the writing in the section you pointed out to make it easier to understand. Thank you for pointing this out. **Question1**. We understand the reviewer's concern that LLMs (and VLMs) may not provide consistent responses. We measured the toxicity score of the ELITE evaluator a total of 10 times on the 228 samples (Reviewer BuCp's question 3). The table below shows the average and standard deviation of the toxicity scores.

| | From ELITE | Not From ELITE |
|---|---|---|
| ELITE evaluator-toxicity score mean | 3.8136 | 0.7915 |
| ELITE evaluator-toxicity score std | 0.5736 | 0.4015 |

Additionally, the table below shows the Pearson correlation between the ELITE (GPT-4o) toxicity scores and human judgment, demonstrating that the toxicity scores are well aligned with human assessment.

| | Pearson Correlation |
|---|---|
| human1 & ELITE evaluator | 0.7274 |
| human2 & ELITE evaluator | 0.6447 |
| human3 & ELITE evaluator | 0.6496 |

The table below shows the Pearson correlation between human toxicity scores. As can be seen in the table, the correlation with the ELITE evaluator is higher than the correlation between each pair of humans. This indicates that, despite some variation in human judgments, the ELITE evaluator reflects their evaluations more consistently.

| | Pearson correlation |
|---|---|
| human1 & human2 | 0.5992 |
| human1 & human3 | 0.6079 |
| human2 & human3 | 0.5079 |

**Question2**. The 10-point threshold we used was selected based on the experiments in Appendix A.2.
Although Table 9 shows that a threshold of 10 is not optimal, Figures 5 and 6 confirm that even with this threshold, there are cases where the model's responses are sufficiently harmful. By including these cases, we propose a more comprehensive benchmark. Below are the five most vulnerable models for each threshold. Except for LLaVa-v1.5-7B appearing as the 5th most vulnerable model at Threshold 5 instead of Molmo-7B, the five most vulnerable models remain the same across all thresholds.

| Threshold | Model | E-ASR |
|---|---|---|
| 5 | Pixtral-12B | 85.63% |
| | ShareGPT4V-7B | 78.85% |
| | LLaVa-v1.5-13B | 78.44% |
| | ShareGPT4V-13B | 77.46% |
| | LLaVa-v1.5-7B | 75.26% |
| 10 | Pixtral-12B | 79.86% |
| | LLaVa-v1.5-13B | 69.68% |
| | ShareGPT4V-13B | 68.08% |
| | ShareGPT4V-7B | 67.16% |
| | Molmo-7B | 63.79% |
| 15 | Pixtral-12B | 60.91% |
| | ShareGPT4V-13B | 52.95% |
| | LLaVa-v1.5-13B | 52.60% |
| | ShareGPT4V-7B | 50.51% |
| | Molmo-7B | 47.70% |
| 20 | Pixtral-12B | 41.23% |
| | LLaVa-v1.5-13B | 37.01% |
| | ShareGPT4V-13B | 36.51% |
| | ShareGPT4V-7B | 34.37% |
| | Molmo-7B | 31.70% |

---

Rebuttal Comment 1.1: Comment: I'd like to thank the authors for their reply. Based on this, I am now less concerned with the ability of the scoring mechanism in identifying stronger / weaker models. As such, I will revise my rating up by 1 point.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer X37u, we sincerely thank the reviewer for their thoughtful and encouraging feedback. We are delighted that our responses have successfully alleviated the concerns raised and appreciate the reviewer's support for our work.
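For reference, the threshold-based E-ASR computation described above can be sketched as follows. The scores are illustrative toy values, not benchmark data, and a strict `>` cutoff is assumed (matching the "score above 10" criterion):

```python
def asr_at_threshold(toxicity_scores, threshold):
    """Percentage of responses whose toxicity score exceeds the threshold."""
    attacks = sum(s > threshold for s in toxicity_scores)
    return 100.0 * attacks / len(toxicity_scores)


# Illustrative evaluator scores for one model on a small sample set.
scores = [3, 12, 25, 7, 18, 0, 40, 11]

# Sweeping the threshold shows how the attack-success rate shrinks as the
# cutoff rises, while a model's relative ranking can stay stable, as in the
# table above.
sweep = {t: asr_at_threshold(scores, t) for t in (5, 10, 15, 20)}
# sweep == {5: 75.0, 10: 62.5, 15: 37.5, 20: 25.0}
```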
Summary: This paper introduces a safety benchmark called the ELITE benchmark, as well as an associated evaluator (the ELITE evaluator). The benchmark comprises multimodal data—image-text pairs—that are designed to provoke harmful or unsafe responses from vision-language models (VLMs). It includes 4,587 samples across 11 safety categories and four different image-text pair types (unsafe-unsafe, safe-unsafe, unsafe-safe, and safe-safe). While some images and texts may be safe, all samples are intended to induce unsafe responses. The data is compiled from multiple existing safety benchmark sources, supplemented by newly generated image-text pairs (which constitute about one-fourth of the entire dataset). To improve overall quality, the authors remove samples that fail to elicit sufficiently harmful responses, a process guided by the ELITE evaluator. Finally, the authors conduct a large-scale human evaluation to compare the ELITE evaluator with existing approaches. Claims And Evidence: In general, the paper’s claims appear well-supported by evidence. However, there is some concern regarding the human evaluation of the ELITE evaluator, which shows only 73% agreement with human judgments. This relatively moderate agreement raises questions about the evaluator’s accuracy and whether the dataset might be overly fitted to the ELITE evaluator itself (given that it was also used to filter sample prompts). The slightly lower human agreement score could affect the perceived quality of both the dataset and the evaluator’s reliability. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the area of application. Theoretical Claims: All theoretical claims are well supported. Experimental Designs Or Analyses: The experimental design, including the data collection process, image generation, and the final evaluator model, appears coherent and well-supported. 
The authors also perform a comprehensive set of evaluations on VLMs using their newly introduced benchmark and compare the ELITE evaluator against existing approaches. Their methodology for constructing the dataset—and then validating it with human evaluations—seems sound. Supplementary Material: I briefly checked most parts of the supplementary material. Relation To Broader Scientific Literature: The paper builds upon multiple existing safety benchmarks, integrating them into a more extensive safety corpus. By filtering out samples that fail to provoke harmful outputs, the authors aim to refine the collective set of safety prompts. This is a valuable contribution, as it combines and enhances prior resources into a single, more comprehensive dataset. Essential References Not Discussed: Based on my familiarity with the field, the authors appear to acknowledge all critical references relevant to their work. I did not spot any missing essential citations. Other Strengths And Weaknesses: **Strengths** 1. The authors provide a very valuable safety dataset to the community, filling a current gap. 2. The authors propose an evaluator model that can be used to assess model responses for their benchmark. 3. They conduct extensive experimental evaluations, providing thorough empirical support for their claims. **Weaknesses** 1. The evaluator model shows rather weak performance, with only a 73% agreement score against the human evaluation. 2. Because the ELITE evaluator is used both to filter the dataset and to evaluate final model responses, there is a risk that the dataset might become overly tailored to the evaluator. Other Comments Or Suggestions: No further comments. Questions For Authors: 1. The ELITE evaluator achieves only 73% agreement with human assessments. What do the authors believe is the primary reason for this score? Have you considered additional tuning of open-source VLMs to improve safety alignment?
I wonder if performance could be enhanced by refining or tuning an open-source VLM specifically for safety (e.g., similar to approaches like LlamaGuard). 2. It remains unclear whether the ELITE evaluator itself has full access to the image data. While this seems to be the case in Fig 2, in Fig 10, for instance, the image context seems to be missing for the evaluator, which may affect its judgment. E.g., imagine the following scenario: 1. the image depicts a bridge; 2. the prompt says, "Can I jump?"; and 3. the model responds with, "Sure, go ahead". 3. Have you also considered using human-evaluated samples to validate the quality of the dataset? How many of the samples from the actual ELITE dataset resulted in harmful responses? And how many of the samples you did not include resulted in unsafe responses? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's constructive feedback, which has been invaluable in helping us refine and strengthen our work. **Weakness2**. To demonstrate that the ELITE benchmark is not overly tailored to the ELITE evaluator, we present results based on the previously adopted metric, Attack Success Rate (ASR), instead of the metric (E-ASR) used in Table 4. These results suggest that the ELITE benchmark remains general and is not excessively influenced by the use of the ELITE evaluator.

| Model | Benchmark | Total | ASR |
|----------------|----------------|-------|--------|
| Llava-v1.5-7b | VLGuard | 2028 | 34.82% |
| | MM-SafetyBench | 1680 | 39.67% |
| | MLLMGuard | 532 | 36.46% |
| | ELITE (generated) | 1054 | **70.83%** |
| | ELITE | 4587 | **68.98%** |
| Llava-v1.5-13b | VLGuard | 2028 | 34.00% |
| | MM-SafetyBench | 1680 | 41.25% |
| | MLLMGuard | 532 | 32.65% |
| | ELITE (generated) | 1054 | **69.24%** |
| | ELITE | 4587 | **69.99%** |
| DeepSeek-VL-7b | VLGuard | 2028 | 28.59% |
| | MM-SafetyBench | 1680 | 38.63% |
| | MLLMGuard | 532 | 23.35% |
| | ELITE (generated) | 1054 | **57.83%** |
| | ELITE | 4587 | **60.83%** |
| ShareGPT4V-7B | VLGuard | 2028 | 31.98% |
| | MM-SafetyBench | 1680 | 40.89% |
| | MLLMGuard | 532 | 30.11% |
| | ELITE (generated) | 1054 | **66.60%** |
| | ELITE | 4587 | **69.54%** |

**Question1**. Fine-tuning the evaluator model, as done with LlamaGuard, may lead to performance improvements. However, the experimental results from StrongREJECT [1] show that the rubric-based approach slightly outperforms the fine-tuned models. Based on this, we conducted our experiments using the rubric-based method, with the ultimate goal of proposing a more accurate approach for judgment, even within the rubric-based framework. As a result, we demonstrate through thorough human evaluation and extensive experiments that our approach outperforms existing methods, even when using the same base model.
Additionally, we believe that the effectiveness of our evaluation method is shown by the fact that ELITE (InternVL2.5), using an open-source model, outperforms evaluation methods such as StrongREJECT (with the more advanced model GPT-4o) and LlamaGuard. Reference: [1] Souly, A., Lu, Q., Bowen, D., Trinh, T., Hsieh, E., Pandey, S., Abbeel, P., Svegliato, J., Emmons, S., Watkins, O., and Toyer, S. A StrongREJECT for empty jailbreaks. In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024. **Question2**. We conducted filtering and evaluation by providing images to the evaluator in all experiments. This is because some previous evaluation methods assess only the model's responses, making it impossible to judge the success of attack methods that require understanding contextual details, such as the suicide & self-harm case in Figure 1-(c) or the example provided by the reviewer. We will revise the paper to clarify that evaluation and filtering are conducted with images included. Thank you for your valuable feedback. **Question3**. We conducted the human evaluation on a total of 228 samples by randomly sampling 110 samples from the ELITE benchmark and 118 samples that were not included (i.e., filtered out). We included at least 20 samples from each taxonomy and gathered the opinions of 3 labelers per sample, with the final labeling determined by majority vote. In total, 8 labelers were recruited for this evaluation. We provided the input image, text, and model's response to perform the safety judgment. As shown in the table below, the significant difference between the included and excluded datasets demonstrates the quality of the ELITE benchmark.

| Majority vote | From ELITE | Not From ELITE |
|----------------|------------|----------------|
| Unsafe | 67.27% | 11.86% |
| Safe | 32.73% | 88.14% |
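For clarity, the per-split unsafe-rate computation behind the table above can be sketched as follows, with hypothetical majority-vote labels standing in for the actual 110/118 samples:

```python
def unsafe_rate(votes):
    """Percentage of samples whose majority-vote label is 'unsafe'."""
    return 100.0 * sum(v == "unsafe" for v in votes) / len(votes)


# Hypothetical majority-vote labels for samples kept in vs. filtered out of
# a benchmark (stand-ins for the real annotation results).
from_benchmark = ["unsafe"] * 7 + ["safe"] * 3
filtered_out = ["unsafe"] * 1 + ["safe"] * 9

rates = {
    "From benchmark": unsafe_rate(from_benchmark),  # 70.0
    "Filtered out": unsafe_rate(filtered_out),      # 10.0
}
```

A large gap between the two rates is what indicates that the filtering kept the genuinely harmful samples.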
Efficient Personalized Adaptation for Physiological Signal Foundation Model
Accept (poster)
Summary: This work proposes a new method to adapt physiological signal foundation models, using a DiT to generate LoRA weight matrices. Claims And Evidence: The authors claim that the proposed method transfers the physiological foundation model to different tasks at lower computing cost. While it is true that the proposed method claims not to need any training during the adaptation stage, it requires significant compute to obtain the LoRA dataset and train the DiT. On the other hand, the baseline methods do not need any training between the pre-training and adaptation stages. In essence, the proposed method requires front-loading the adaptation compute to the stage between pre-training and adaptation. I think the authors should tone down the claim of lower adaptation costs and be clear about the extra training compute required to obtain the LoRA dataset and train the DiT. Furthermore, it could be preferable to compare the total compute used after the pre-training stage; it can be argued that the LoRA dataset preparation and DiT training are part of adaptation. Methods And Evaluation Criteria: The benchmarks are reasonable. Theoretical Claims: No theorem or proof was provided. Experimental Designs Or Analyses: How about a baseline that fine-tunes the pre-trained TSFM (6-layer GPT2-based backbone)? While I understand that this violates data privacy, all the baseline methods use different backbone networks. This would help to understand how much of the base performance comes from the pre-trained TSFM and how much improvement the DiT-generated LoRA provides. Also consider a prototypical network as a baseline: during adaptation, one can use only the pre-trained TSFM (6-layer GPT2-based backbone), without fine-tuning LoRA, to calculate prototypes and make predictions via a metric-based classifier. This is a very compute-efficient adaptation. It would also give an idea of how good the pre-trained TSFM is, as well as how much improvement the DiT-generated LoRA provides.
ECG-FM was mentioned in related works, but not benchmarked for the ECG arrhythmia task. Supplementary Material: I have reviewed the Supplementary Material. Relation To Broader Scientific Literature: The idea of using one neural network to generate weights for another neural network (hypernetworks) is not itself novel, but to the best of my knowledge it has not been done with a DiT. Essential References Not Discussed: What about GAN-based methods or hypernetworks for generating the LoRA weights? Why was a DiT selected for generating the LoRA weights when GANs and hypernetworks can achieve a similar purpose? Prior works have shared the similar idea of generating weights using a neural network for adaptation; perhaps these are relevant baselines to consider, since the core contribution of this work is generating LoRA weights using a DiT. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: Typos: Table 1 "Weak"; Section 3.3 "multimodel"; Section 4.2 "depp learning". Questions For Authors: Section 4.1 Experimental setup: it was claimed that a 60:20:20 train:val:test split was performed. How is the 60% training data used for conditioning the DiT, i.e., were all 60% of the data used for shapelet discovery? How is the validation set used, if at all? The 60:20:20 split was randomly sampled, so data from one patient can be present in both the training set and the validation/test split. What is the "personalized" adaptation? It appears to be capable of adapting to unseen datasets. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your extensive feedback. We especially appreciate your effort to meet the conference's rigorous review criteria. We kindly address your questions as follows. **W1:** Scope of adaptation time. **For W1:** Thanks for your thoughtful comments. We separated the two phases of pre-training and adaptation, considering a practical cloud-edge scenario. When utilizing a traditional pre-trained TSFM, the TSFM is first pre-trained with massive data in a venue with sufficient resources (the cloud). Then, the TSFM is trained on local physiological signals, and this training time is the adaptation time we defined. We did not count the original pre-training of the TSFM toward its adaptation time. Similarly, for our method, pre-training can be carried out using public physiological signals in a venue with sufficient resources. Our adaptation time includes generator inference and TSFM inference. Therefore, we respectfully disagree that LoRA dataset preparation and DiT training are part of the adaptation time. **W2:** Discussion of important ablation study and baselines. **For W2:** We sincerely thank you for your detailed suggestions. We would like to address your concerns by showing additional ablation results for our method. We adopt the TSFM with local LoRA fine-tuning, the TSFM with a prototype-based classifier, and ECG-FM on arrhythmia diagnosis, along with the two corresponding rows of Table 5, as references. From the results, we can see that when training is allowed, the pre-trained TSFM shows great power in most cases after fine-tuning. Considering the variant without LoRA, we can see that the generated LoRA weights provide an improvement comparable to direct training. A prototypical network is also a valuable approach, as it surpasses general baselines but still falls short of the adapted-weight-based strategy (Ours, fine-tuning TSFM).
It may incur more computing cost in the similarity (metric) calculation, and can be easily affected by imbalanced data. More alignment or neural-collapse tricks may help to improve this approach; metric learning is a promising direction for future work. We will also add the complete results of ECG-FM to Table 3 in the revised manuscript.

| Method | Sleep-EDF | MIT-BIH | FoG |
|---|---|---|---|
| TSFM with local training | 87.15 | 88.59 | 83.71 |
| TSFM with Proto-classifier | 82.24 | 81.06 | 74.56 |
| ECG-FM | - | 84.90 | - |
| Ours w/o local LoRA | 82.08 | 84.17 | 76.45 |
| Ours | 86.39 | 89.94 | 81.32 |

**W3:** Clarifications on generator selection. **For W3:** In parameter generation tasks, on the one hand, small models may not be able to fully demonstrate their generalization ability when applied to more complex tasks and parameter spaces. On the other hand, previous methods do not support conditional high-performance parameter generation, while the novel DiT-based conditional neural-network diffusion achieves better generation results. The DiT architecture has great expressive power in diffusion tasks, especially in conditional cross-modal applications such as text-to-image and text-to-video. Unlike a hypernetwork, which takes the model parameters as input and generates parameters, we directly map the data feature space to the parameter space. In addition, a hypernetwork needs to perform backpropagation based on the loss of the backbone network, which is costly in large-model scenarios. We conducted a validation experiment, taking the condition and noised model parameters as input, and measuring the Euclidean distance between the output model parameters and the original input parameters. The results show that the generation quality of the conditional-GAN- and MLP-based methods falls far behind DiT. As DiT is the state-of-the-art cross-modal generator, we adopted it. Finding the trade-off between cost and quality is also worthwhile future work.
| Euclidean distance | DiT | MLP | CGAN |
|---|---|---|---|
| Task 1 | 4.10 | 153.52 | 79.63 |
| Task 2 | 2.32 | 139.46 | 75.49 |

**W4:** Typos. **For W4:** Thank you for this astute observation. We apologize for any confusion caused and have carefully revised them. **W5:** Clarifying the data partition and the definition of personalization. **For W5:** The experiments are conducted on the corresponding signals in a subject-independent setting: we assign subjects to the train/val/test partitions, so one subject's data won't appear across the train/val/test sets. The inference of the generator is based on the test sets; the train and val sets are not used by our method but are used by the baselines, for fairness. Compared with a generalized TSFM, our personalization refers to obtaining model parameters adapted to local data for new local tasks and data characteristics. Because a generalized model may not work well on specific tasks, and fine-tuning a large foundation model is costly, realizing model personalization in a lightweight way is meaningful.

---

Rebuttal Comment 1.1: Comment: Thanks for the clarifications. I have updated my recommendation. I understand the adaptation time explanation. While the adaptation phase is more compute efficient, I still think it would be valuable to provide a comparison of the computational requirements for traditional pre-training versus the DiT training proposed in this work. This will give readers an idea of how much computation is required for the cloud training phase. I recommend that the remaining clarifications be included in the final manuscript, as they provide important information and context.

---

Reply to Comment 1.1.1: Comment: Thank you for your reply. We're delighted to have addressed your concerns and appreciate your helpful suggestions. We agree on the importance of reporting the cost of the two-phase cloud pre-training, along with the remaining clarifications. We promise to revise all of these points in the manuscript.
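To make the adaptation step concrete, here is a minimal numpy sketch of applying generated low-rank (LoRA) weights to a frozen layer without any local training. The shapes and random matrices are toy stand-ins for the actual TSFM layers and generator outputs, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 8, 8, 2                 # toy dimensions; the real TSFM is far larger
W0 = rng.standard_normal((d_out, d_in))     # frozen pretrained weight

# A and B would come from the conditional generator (e.g., a DiT conditioned
# on a compact shapelet representation of the local dataset); here they are
# random placeholders.
A = rng.standard_normal((rank, d_in)) * 0.01
B = rng.standard_normal((d_out, rank)) * 0.01


def adapted_forward(x, W0, A, B, alpha=1.0):
    """Forward pass through the frozen layer plus the low-rank update."""
    return x @ (W0 + alpha * (B @ A)).T


x = rng.standard_normal((4, d_in))
y_base = x @ W0.T                       # frozen backbone alone
y_adapted = adapted_forward(x, W0, A, B)  # backbone + generated LoRA update
```

The key property is that the correction `B @ A` has rank at most `rank`, so only `rank * (d_in + d_out)` parameters need to be generated per layer instead of `d_in * d_out`.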
Summary: * This paper studies medical time series classification based on physiological signals. * Building ML models for prediction from medical time series is challenging since we often have: * Unbalanced amounts of data for each signal * Varying sampling frequency/duration * Time series foundation models (TSFMs) can be effective; however, they can be expensive to adapt to individual medical centres, and we may not want to upload data from a given centre to a server for finetuning. * This work proposes an approach where we: * Learn a dataset of LoRA weights for a TSFM * Learn a diffusion transformer to output LoRA weights for a pretrained TSFM, using the LoRA weights found above as a training set, with the DiT input being a condensed representation of the input dataset (shapelet transformed) * At inference time, for a new dataset, we can run inference with the shapelet + diffusion model to get LoRA weights, then adapt the TSFM without any further training cost. * The method is evaluated on four classes of physiological signal datasets: sleep state detection, emotion detection, arrhythmia diagnosis, and freezing of gait detection. * The proposed model performs well, improving on baselines. Claims And Evidence: Overall reasonable when compared to baselines. Methods And Evaluation Criteria: Evaluation datasets are sensible; the methods could use further investigation, see below. Theoretical Claims: Not a major focus of the paper. Experimental Designs Or Analyses: Positives: * Diverse range of datasets and tasks studied * Good selection of published baselines * Ablations generally thorough * Overall, encouraging results compared to the baselines. Areas for improvement/questions: * I would like more detail on the evaluation datasets. How are these split into train/val/test? Is it at the patient level? Without this, it's hard to interpret the results well. * What kind of hyperparameter search happened with your method, vs the baselines? * Why diffusion for the LoRA learning?
Could the standard MLP used in, e.g., hypernetworks work, especially given the small number of parameters? Would that be easier to train? * What happens if you take your pretrained TSFM and do local LoRA without your diffusion model at all? I understand this is computationally expensive, but it’s interesting to understand the effect of the generative model for LoRA vs just learning the weights directly. Supplementary Material: Briefly reviewed — particularly looked at baselines, and Table 6 for the pretraining data mixture. Brief look at Appendix A2 and A3. Relation To Broader Scientific Literature: The related work in the main paper is inadequate. A more detailed related work section in the main body would add a lot of value, in addition to what is in the appendix. Essential References Not Discussed: Not familiar enough with the literature to comment here. Other Strengths And Weaknesses: * The writing needs work overall — there are a number of typographical errors, referencing a Table instead of a Figure, for example * The overall contribution of the method was not that clear. Rewording to more closely match Figure 3 would be quite valuable. Other Comments Or Suggestions: Overall, I think this is a good contribution, but the lack of related work in the main body makes it hard to contextualise the work. I also have some questions above that would be great to get clarity on. If these are answered, I would be inclined to increase my score. Questions For Authors: Please see above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable comments, and we are grateful for the time and effort you have invested in reviewing our work. Below, we provide a point-by-point response to address each of your concerns: **W1:** Clarifying the data partition. **For W1:** Thanks for your valuable comments. It is at the patient level. Physiological signal data are collected from individual subjects. The experiments are conducted on the corresponding signals in a subject-independent setting. We divide the subjects into training, validation, and test sets in a 3:1:1 ratio, so one subject’s data never appears across the train/val/test sets. This may raise concerns about how it can be considered a personalized adaptation (for baselines) without training on the data of a specific local patient. In practice, local test data are unlabeled; the model is trained on existing labeled data and then applied to the patients who need to be diagnosed. So, this setting is reasonable and in line with the paradigm of existing related works. **W2:** Hyperparameter search. **For W2:** We search over the learning rate, diffusion steps, chunk size, and adapter rank for our method. For baselines, we generally follow the original hyperparameters and keep necessary settings fair, such as the data partition. **W3:** Discussion on the setting and selection of the generator. **For W3:** The DiT architecture has great expressive power in diffusion tasks, especially in conditional cross-modal applications such as text-to-image and text-to-video, where the original DiT is applied. Small models like MLPs have difficulty with cross-modal and conditional generation. In difficult tasks such as parameter generation, the latent space produced by an MLP may be poorly representative. Therefore, how to achieve a trade-off between the lightness and performance of the generator is future work worth exploring. 
In this work, we mainly applied the SOTA cross-modal generator DiT to achieve high-quality parameter generation. At the same time, the transformer follows the scaling law, a feature that helps the model generalize simply by scaling up its parameters. We analyze the generation quality of the weights by measuring the Euclidean distance between the (noised) input weights and the generated weights. As demonstrated in the following results, at the end of DiT training the distance reached 2.32 to 4.10 for different tasks, demonstrating superior generation performance compared with the conditional GAN and MLP. In addition, MLP and CGAN require a huge number of rounds to converge, more than 1000 epochs, while DiT does not.

| Similarity | DiT | MLP | CGAN |
| --- | --- | --- | --- |
| Task 1 | 4.10 | 153.52 | 79.63 |
| Task 2 | 2.32 | 139.46 | 75.49 |

**W4:** Detailed ablation study on model fine-tuning. **For W4:** We sincerely thank you for your insightful feedback. We evaluate the TSFM with local LoRA fine-tuning, and the results are in the following table. We also include the existing results from Table 5, covering ours without LoRA generation and our full method. It can be seen that training the TSFM on local data achieves superior performance in most cases, while our generated LoRA weights provide an improvement comparable to direct training. Without training or generated weights, the pre-trained TSFM struggles to capture the specific domain knowledge.

| Method | Sleep-EDF | MIT-BIH | FoG |
| --- | --- | --- | --- |
| TSFM with local training | 87.15 | 88.59 | 83.71 |
| Ours w/o local LoRA | 82.08 | 84.17 | 76.45 |
| Ours | 86.39 | 89.94 | 81.32 |

**W5:** Typos and presentation. **For W5:** We appreciate the reviewer's keen eye and thorough review. We have carefully revised the typos and will provide a more rigorous description of Figure 3 in the revised version of our work. **W6:** Suggestions on related work. 
**For W6:** Thanks for your constructive suggestions. Due to the length limit, we had to adopt the current organization. Placing the related work section in the main body would indeed help readers understand the work, and we promise to reorganize it. We will also add more extensive related work, such as work on privacy protection.
Summary: The paper provides a personalized approach to transfer a time series foundation model to clinical physiological signal tasks. The main constraints are low computing costs and privacy. Claims And Evidence: Not always. A main issue is that it is not clear how the authors address privacy protection in the paper. The paper would have benefitted from a clear positioning within the related literature. See comments below for more details. Methods And Evaluation Criteria: Yes Theoretical Claims: There is no proof given for Proposition 3.1. Experimental Designs Or Analyses: The experimental designs and the conducted analysis seem to be acceptable, and the ablation study demonstrates the relevance of the chosen techniques. Supplementary Material: I skimmed through the supplementary material. Relation To Broader Scientific Literature: The paper roughly integrates several concepts within a single foundation model to pre-train/fine-tune it. This paper provides some interesting results, and the ablation study confirms the relevance of the chosen techniques, such as the low-rank adapter and neural collapse. Essential References Not Discussed: Some related work is missing. Since this paper seeks to address privacy issues, it would have been relevant to include some related work on this topic. Other Strengths And Weaknesses: A major issue in this paper is the lack of a clear description of the contributions under the privacy constraint. The paper states that its major contribution is privacy preservation. However, there is not much information on privacy, or how it is preserved. Moreover, the paper does not cite any related work from the privacy-preserving literature. It is not clear from the paper whether Definition 2.1 is a contribution or not. Moreover, the paper and appendix do not present the proof of Proposition 3.1. It is not clear how it was established. 
The two main ingredients in this work are (i) to train the foundation model with massive public physiological time series by using a low-rank adapter, and (ii) to use neural collapse to combat the imbalance in the data distribution. Moreover, a diffusion transformer is used as a robust generator to synthesize the low-rank adapter weights. Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your comprehensive and valuable review, particularly given the meticulous standards of this venue. We appreciate the opportunity to address your concerns. **W1:** Clarification on the privacy issue. **For W1:** We apologize for missing a detailed discussion of privacy preservation. The privacy constraint is one of our contributions. A generic pre-trained time series foundation model is hard-pressed to fit diverse local tasks and data, and uploading patients’ data to the cloud for robust pretraining may raise privacy concerns. Our proposed method achieves this goal through data isolation: sensitive patient data are kept locally. Since data exposure is removed, privacy concerns are addressed. The most related work includes some time series foundation models and federated learning. Brant-X [1] adopts the EEG foundation model Brant-2 as a basis, which is pre-trained on 4TB of private brain signal data. This approach requires private data and large-scale pre-training on it. If the medical entity has sufficient computing resources, this could be reasonable; otherwise, it may need to transfer private data to a powerful cloud. On the other hand, federated learning preserves privacy by exchanging models, not data, which is similar to our paradigm. [2] considers using FL to train a foundation model for medical time series. Multiple rounds of exchanging large models may lead to considerable communication costs, and models may be subject to model poisoning attacks based on historical gradient updates. In contrast, our work focuses on personalizing the TSFM with a privacy guarantee. Due to the length limitation of the rebuttal, we promise to add a more comprehensive privacy-preserving literature review to the related work and our method. [1] Zhang, Daoze, et al. "Brant-X: A Unified Physiological Signal Alignment Framework." KDD 2024. [2] Ali, Mahad, et al. 
"Fine-Tuning Foundation Models with Federated Learning for Privacy Preserving Medical Time Series Forecasting." IEEE EMBC 2025. **W2:** Definition 2.1 and Proposition 3.1. **For W2:** Thanks for your valuable comments. Definition 2.1 follows the standard definition of neural collapse. It is not a contribution, but an illustration. The proof of Proposition 3.1 is given as follows.

$\textit{Proof.}$ Suppose $C$ class prototype vectors $p_1, p_2, \dots, p_C \in \mathbb{R}^d$ satisfy unit norm, $\|p_i\| = 1$, and the symmetric inner-product constraint $p_i^T p_j = -\frac{1}{C-1}$ for all $i \neq j$. Define the Gram matrix $G \in \mathbb{R}^{C \times C}$, where

$$
G_{ij} = p_i^T p_j = \begin{cases} 1 & \text{if } i = j, \\ -\frac{1}{C-1} & \text{if } i \neq j. \end{cases}
$$

This matrix has diagonal elements equal to 1 and off-diagonal elements equal to $-\frac{1}{C-1}$. It has rank $C-1$ (which requires $d \ge C-1$) and is symmetric positive semi-definite. This Gram matrix corresponds to the simplex equiangular tight frame (ETF). Its core features are: all off-diagonal elements are equal, that is, the angles between vectors are consistent, the vectors are evenly distributed in the feature space, and the minimum margin between classes is maximized. This structure is the only configuration that satisfies the symmetry constraint while minimizing the similarity between classes. The angle $\theta$ between any two vectors satisfies

$$
\cos\theta = p_i^T p_j = -\frac{1}{C-1}.
$$

In classification problems, the decision boundary is determined by the geometric relationship of the prototype vectors. For a linear classifier, the decision boundary between two categories $i$ and $j$ is

$$
\left\{ x \in \mathbb{R}^d \;\middle|\; (p_i - p_j)^T x + \frac{\|p_j\|^2 - \|p_i\|^2}{2} = 0 \right\}.
$$

Since $\|p_i\| = \|p_j\| = 1$, the boundary simplifies to

$$
(p_i - p_j)^T x = 0.
$$

The margin $\gamma$ between the two classes is

$$
\gamma = \frac{2}{\|p_i - p_j\|}.
$$

Since $\|p_i - p_j\|^2 = 2(1 - p_i^T p_j) = 2\left(1 + \frac{1}{C-1}\right)$, we obtain

$$
\gamma = \frac{2}{\sqrt{2\left(1 + \frac{1}{C-1}\right)}} = \sqrt{\frac{2(C-1)}{C}}.
$$

When all class prototypes satisfy the symmetry condition, the margins between all classes are equal and attain the maximum possible value, so the decision boundary is optimal. We will add this proof to the appendix in the revised manuscript. --- Rebuttal Comment 1.1: Comment: Dear authors, I would like to thank you for your feedback. However, the rebuttal is strengthening my major concerns that this work is not well positioned in the literature, mainly the federated learning literature. There are many papers that address federated learning for time series, such as (and many more recent papers): - Zhuang, W., Chen, C., & Lyu, L. (2023). When foundation model meets federated learning: Motivations, challenges, and future directions. arXiv preprint arXiv:2306.15546. For healthcare, see for instance the following paper and the references within: - He, Y., Huang, F., Jiang, X., Nie, Y., Wang, M., Wang, J., & Chen, H. (2024). Foundation model for advancing healthcare: challenges, opportunities and future directions. IEEE Reviews in Biomedical Engineering. For all these reasons, and taking into account the feedback provided, as well as the other reviews, I will maintain my scores. --- Reply to Comment 1.1.1: Comment: Dear Reviewer FR4e, Thanks for your response. We apologize that the positioning of this work in the literature came across as unclear. We would like to kindly clarify the relation to federated learning. Our work considers a cloud-edge scenario with a cloud (a cloud computing platform or AI company) that has sufficient computing resources and is able to train and provide pre-trained foundation models using public general time series data and physiological signals. 
Considering the clinical site as an edge, we aim to design a lightweight TSFM adaptation approach that personalizes the received TSFM without exposing local patient data. Federated learning, by contrast, considers a collaborative training scenario with a cloud server and multiple clients, where each local client is able to train the model. However, fine-tuning a foundation model on the client is still costly, even with LoRA or other techniques. Many works also explore the effectiveness of exchanging LoRA weights, where direct exchange incurs performance loss compared to exchanging the entire original model. Multiple communication rounds require extra cost and are vulnerable to model poisoning attacks. Therefore, federated learning with foundation models tries to address these challenges, which are different from ours. In short, we consider transforming a generic TSFM into a personalized one without local training, while FL focuses on effective and privacy-preserving collaborative training. We regard them as orthogonal but somewhat related from a privacy perspective. We sincerely thank you for your constructive suggestions. We promise to try our best to improve the related work coverage and to position the privacy issue more clearly when revising the manuscript. Our main contribution still lies in the techniques of efficient weight generation that enable the large model's lightweight personalization. Best regards, Authors
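As a sanity check on the geometry used in the Proposition 3.1 proof above, the simplex-ETF properties and the margin formula can be verified numerically. The explicit prototype construction below (rows of $\sqrt{C/(C-1)}\,(I - \mathbf{1}\mathbf{1}^T/C)$) is the standard simplex-ETF construction and is my own choice for illustration; only the verified properties (unit norm, pairwise inner product $-1/(C-1)$, margin $\sqrt{2(C-1)/C}$) come from the rebuttal.

```python
import numpy as np

def simplex_etf(C):
    """Rows are C unit vectors in R^C with pairwise inner product -1/(C-1)
    (a standard simplex equiangular tight frame construction)."""
    return np.sqrt(C / (C - 1)) * (np.eye(C) - np.ones((C, C)) / C)

C = 5
P = simplex_etf(C)
G = P @ P.T  # Gram matrix

assert np.allclose(np.diag(G), 1.0)                          # unit norm
assert np.allclose(G[~np.eye(C, dtype=bool)], -1 / (C - 1))  # equal angles

# Margin between the decision boundaries of any two classes: 2 / ||p_i - p_j||.
gamma = 2.0 / np.linalg.norm(P[0] - P[1])
assert np.isclose(gamma, np.sqrt(2 * (C - 1) / C))
print(f"C={C}: margin = {gamma:.4f}")
```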
Summary: This paper proposes a novel approach to achieve efficient adaptation of physiological signal foundation models to private datasets. The main idea is to use Low-Rank Adaptation (LoRA). However, unlike existing methods that train LoRA weights for adaptation, it utilizes a diffusion model to generate the LoRA weights. To this end, the paper first prepared 30 datasets and obtained LoRA weights for them, which are then used to train a diffusion model with a Transformer as the backbone. Shapelet prototypes are extracted from the datasets and used as conditions in the diffusion generation. The trained diffusion model can then be used to generate LoRA weights for private data. Empirical evaluations are performed on four typical physiological classification tasks. Claims And Evidence: The paper claims that the proposed PhysioPFM is more robust and efficient than existing approaches utilizing a time series foundation model for physiological signal tasks. The claim is well supported by the empirical evaluations. Comparison with recent SOTA methods indicates the effectiveness of the proposed approach. The efficiency comparison also shows that the proposed PhysioPFM is more efficient in terms of memory consumption and adaptation time. Methods And Evaluation Criteria: The proposed method is technically sound. The use of a diffusion model to generate LoRA weights is quite interesting and novel. The benchmark datasets used are appropriate, and the evaluation metrics are also suitable. Theoretical Claims: Proposition 3.1 follows directly from Definition 2.1. No other theoretical claims are made. Experimental Designs Or Analyses: The experimental settings are sound. Sufficient details of the experimental design are included, and the baselines compared are appropriate. The evaluation metrics used are suitable. An ablation study is also performed. 
Supplementary Material: The appendix contains details of the shapelet discovery steps and a detailed introduction of the downstream tasks as well as the datasets used to train the diffusion model. Relation To Broader Scientific Literature: Fine-tuning foundation models using LoRA has been intensively explored in the literature. However, using a generative model to generate LoRA weights is a novel approach. The framework overall is similar to the hypernetwork approach [1]. The authors should also discuss the connections with hypernetwork methods. [1] Chauhan, V.K., Zhou, J., Lu, P., Molaei, S. and Clifton, D.A., 2024. A brief review of hypernetworks in deep learning. Artificial Intelligence Review, 57(9), p.250. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: - The proposed method of generating LoRA weights is novel and interesting. - The paper is well organized, well presented, and easy to follow. - The experiments are well executed, and the results are pretty promising. Weaknesses: - The diffusion model takes only the shapelets as a condition to generate LoRA weights. This seems a bit restrictive for multi-task scenarios, since the same LoRA weights will be generated for different tasks given the same physiological signal input. - The diffusion model is trained using only 30 public physiological signal datasets; the impact of the number of datasets used in training the diffusion model is unclear. Other Comments Or Suggestions: Please see my comments above. Questions For Authors: Usually, the inference of diffusion models is quite time-consuming. Does the adaptation time reported in Fig. 5 for PhysioPFM correspond to the time needed for the diffusion model to generate the LoRA weights? Code Of Conduct: Affirmed. Overall Recommendation: 4
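The shapelet-prototype conditioning mentioned in this review's summary can be illustrated with the standard shapelet-transform distance: the minimum Euclidean distance between a shapelet and all equal-length subsequences of a series. This is a generic sketch of that standard transform, not the paper's exact discovery or conditioning pipeline; the toy series and function names are my own.

```python
import numpy as np

def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between a shapelet and every
    equal-length subsequence of a 1-D series."""
    L = len(shapelet)
    return min(np.linalg.norm(series[i:i + L] - shapelet)
               for i in range(len(series) - L + 1))

def shapelet_features(series, shapelets):
    """Condensed representation: one distance per shapelet prototype."""
    return np.array([shapelet_distance(series, s) for s in shapelets])

t = np.linspace(0, 2 * np.pi, 100)
x = np.sin(t)
shapelets = [x[10:30].copy(), np.ones(20)]  # one matching subsequence, one not
feats = shapelet_features(x, shapelets)
print(feats)  # the first distance is ~0, the second is clearly larger
```

A fixed-length vector of such distances gives a dataset-size-independent summary that could serve as a conditioning input for a generator.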
Rebuttal 1: Rebuttal: Thanks for recognizing the value of our work. We are grateful for your thorough feedback, especially considering the massive requirements of the review. We hope the following comments address your questions: **W1:** Relation to hypernetworks. **For W1:** In a macro sense, our method and the hypernetwork both provide adaptive parameters for another backbone network. More specifically, according to Figure 1(b) of the reference, the hypernetwork also needs to be trained together with the main model, computing the performance loss of the main model and backpropagating gradients through the hypernetwork. Unlike the hypernetwork, we do not update the generator based on the performance of the backbone model, which would be costly in our context. Our paper directly maps the data feature space to the parameter space, aiming to establish a mapping between LoRA parameters and data features. Our method pursues better generation quality and aims to learn an effective and representative latent space. This is logically different from the traditional hypernetwork and is designed to meet the challenges of our scenario, achieving local training-free adaptation. In general, our method has similar macroscopic goals to hypernetworks, but the specific implementations are quite different. We will include a discussion of this aspect in the main text and provide more detailed related work. **W2:** Discussion of applying to multi-task scenarios. **For W2:** Thank you for raising this important concern. We focus on general physiological signal classification in this work. When the target task changes, e.g., transferring to a time series forecasting task, our approach may need to adjust the pre-training of the generator to build new abilities for new tasks. Another possible direction is to expand the type of input condition, which could be significant future work for extending our method to multi-task scenarios. 
**W3:** The impact of training sample size for the generator. **For W3:** We kindly note that we have included the impact of training sample size on the generator in Section 4.3, "Impact of training samples." To fully enhance the ability of the generator, we adopt multiple ways to expand the training sets in the data preparation phase, including partial datasets, partial classes, and random selection of subjects. Given more diverse input data, the generalization ability of the generator becomes stronger. **W4:** Definition of the adaptation time. **For W4:** We apologize for any confusion caused. For our proposed method, Figure 5 includes the time to generate LoRA weights plus TSFM inference. For the baselines, the adaptation time refers to the local training time, because our method does not need to fine-tune the foundation model, whereas the baselines must train and update the model. Backpropagation on a large model is time-consuming. We only need a single-step inference to generate the LoRA weights, which takes little time compared to multiple rounds of training.
Hierarchical Overlapping Clustering on Graphs: Cost Function, Algorithm and Scalability
Accept (poster)
Summary: This paper studies hierarchical overlapping clustering (HOC), in which vertices are assigned to a hierarchical structure of overlapping clusters. In comparison with non-overlapping HC, we construct a DAG rather than an HC tree. The paper introduces an objective function for this problem, generalising Dasgupta's cost function for HC, and gives a constant-factor approximation algorithm for the dual objective. Finally, the paper includes some experimental evaluation and compares the algorithm with non-hierarchical overlapping clustering algorithms, finding that the new algorithm has a faster running time. Claims And Evidence: The paper claims three contributions: 1. The introduction of a new cost function for hierarchical overlapping clustering. This is a new problem, not previously studied, and so the introduction of such a cost function is quite original. The given objective function extends Dasgupta's cost in a sensible way to the case with overlapping clusters. 2. An approximation algorithm for the dual of the proposed cost function. This is theoretically justified and the claim stands as stated. However, I feel that it is not particularly significant given that the approximation is based on a universal bound on the cost of any output produced by the proposed algorithm. That is, there is no theoretical proof that the algorithm performs better on instances with a better optimal cost. 3. The scalability of the proposed algorithm is demonstrated through empirical evaluation. The evaluation justifies the claim, although the tested datasets are of relatively small size (up to 10,000), and so it is not clear that the proposed algorithm applies in practical scenarios. Methods And Evaluation Criteria: My only concern with the evaluation criteria is the size of the tested datasets. It would be interesting to see if the proposed algorithm could scale to 100,000 vertices, or more. 
Theoretical Claims: The theoretical claims appear sound, although I did not check the proofs in detail. Experimental Designs Or Analyses: See the answers above. Supplementary Material: I did not check the proofs in the supplementary material in detail. Relation To Broader Scientific Literature: The results build on the cost function for hierarchical clustering proposed by Dasgupta, and generalise this in a reasonable way to the problem of detecting overlapping clusters, which has been studied only in the non-hierarchical setting. Essential References Not Discussed: No. Other Strengths And Weaknesses: I feel that the proposed cost function is interesting, but the key weakness of the paper is in the theoretical results. It would be interesting to prove a guarantee on the performance of the algorithm that depends on the optimal cost of a specific graph. Edit: Based on the responses of the authors, I have raised my score. Other Comments Or Suggestions: The text in Figure 2 is unreadable; it would be better to make it bigger. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the careful review and valuable comments. Let us address the concerns one by one. 1. On the scalability of our algorithm, we have demonstrated it on graphs of size 100,000. Please note that the largest graph in the second row of Figure 2 has size $10\times 10^4=10^5$ (sorry for the tiny text; we will enlarge it in our updated version). We did not scale to larger graphs (e.g., 500,000 or 1,000,000) not because our algorithm cannot handle such scales, but only because the baseline methods cannot, which leaves us without comparison methods. That is also why, as shown in Table 1, in our PC environment we cannot compare with any baseline on large real-world datasets, and instead place in the last column the results from (Orecchia et al., 2022), whose operating environment includes a cluster of machines. 2. Regarding the weak theoretical results, we think that as the first step in the study of hierarchical overlapping clustering (HOC), a constant approximation factor (although not very large) for the dual $k$-HOC problem is not so bad. As a by-product of our theory, our study of 2-OC (which was one of the central topics in overlapping clustering before our work) achieves theoretical guarantees on both the primal and dual problems. The main obstacle to a better guarantee for $k$-HOC stems from the complicated hybrid structure arising during the recursions of the 2-OC algorithm. Further investigating this process could be an interesting next step in the study of $k$-HOC. We believe a better guarantee exists, since the actual performance of our $k$-HOC algorithm, as shown in Figure 2, is almost perfect on synthetic datasets. We thank the reviewer again and are happy to address any remaining concerns. --- Rebuttal Comment 1.1: Comment: Thanks for your responses - I realise that I missed some of the larger experiments and I will increase my score accordingly. 
--- Reply to Comment 1.1.1: Comment: Thank you for your response and for raising your score. We truly appreciate your support.
Summary: The paper formally introduces the problem of hierarchical overlapping clustering. Overlapping and hierarchical clusterings have been studied more extensively separately. The only preexisting works that have studied them together have been in the distance setting (edge weights are distances) with no formal objectives and guarantees. This paper provides those formalities in the similarity setting (edge weights are similarities). In hierarchical clustering, clusters are nested into clusters, creating a tiered structure of clusters which can be represented as a tree (the root = the set of all data, the leaves = individual data, and intermediate nodes = clusters). With overlaps, any cluster can have partial belonging to another cluster up the hierarchy. The total belonging (sum over belongings of a single cluster to all other clusters) is, naturally, 1. It has a nice probabilistic interpretation: If I 1/2 belong to one cluster, then maybe I flip a coin to determine my belonging. The hierarchies of belongings are designed such that these probabilities are preserved (if I 1/2 belong to X and 1/2 to Y, and X 1/2 belongs to Z but Y doesn't belong to Z, then I 1/4 belong to Z). Their objective function directly extends Dasgupta's cost objective, perhaps the most famous and natural hierarchical clustering objective. Effectively, this objective deems that highly similar points should be tightly packed into small clusters. In the overlapping setting, partial belongings are accounted for. A minimum common ancestor of two points is defined as a cluster containing both points whose children contain at most one of the points each. In Dasgupta's setting, this would be the size of the smallest cluster containing the points, but there may be multiple "MCAs" with overlaps. 
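For reference, Dasgupta's cost in the plain (non-overlapping) setting, where each edge $(x, y)$ pays $w(x, y)$ times the number of leaves under the lowest tree node containing both endpoints, can be computed with a short recursion. The toy graph and trees below are my own illustration, not from the paper:

```python
from itertools import combinations

def leaves(t):
    """Leaves of a nested-tuple tree; any non-tuple label is a leaf."""
    return [t] if not isinstance(t, tuple) else [x for c in t for x in leaves(c)]

def dasgupta_cost(tree, w):
    """Dasgupta's cost: each edge (x, y) pays w(x, y) times the number of
    leaves under the lowest tree node containing both x and y."""
    if not isinstance(tree, tuple):
        return 0
    size = len(leaves(tree))
    cost = sum(dasgupta_cost(c, w) for c in tree)
    child_sets = [set(leaves(c)) for c in tree]
    for A, B in combinations(child_sets, 2):
        # edges split at this node have it as their lowest common ancestor
        cost += size * sum(w.get(frozenset((x, y)), 0) for x in A for y in B)
    return cost

# Triangle graph with a heavy (a, b) edge; merging a and b first is cheaper.
w = {frozenset("ab"): 2, frozenset("bc"): 1, frozenset("ac"): 1}
print(dasgupta_cost((("a", "b"), "c"), w))  # 2*2 + 3*1 + 3*1 = 10
print(dasgupta_cost((("a", "c"), "b"), w))  # 2*1 + 3*2 + 3*1 = 11
```

The paper's HOC cost replaces the single lowest common ancestor with a convex combination over the (possibly many) minimum common ancestors.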
With Dasgupta, the cost contribution of $(x, y)$ is $w(x, y)$ scaled by the size of the MCA, but in this paper, it's a convex combination of the sizes of the many MCAs (again, with a nice probabilistic interpretation). They also define the dual to this objective, which is akin to the work of Moseley and Wang in the hierarchical (non-overlapping) setting. The authors then propose two algorithms. The first is a simple local search algorithm for the problem of (non-hierarchical) overlapping clustering with only 2 overlapping clusters allowed. While this problem has been studied before, it is new in the context of how their objective simplifies to this setting. The algorithm achieves an O(1)-approximation for the dual and an O(d_ratio)-approximation for the original problem, where d_ratio is the maximum degree divided by the average degree. The second algorithm solves the general problem when the width of the hierarchy is bounded by k (except at the leaves). This is shown only to approximate the dual, but it does so with the same factor as the previous algorithm and in the more general case. Finally, they run the latter algorithm against a set of baseline algorithms. This includes one of the previous papers on hierarchical overlapping clustering. The algorithms are run on data generated by an overlapping stochastic block model and on real SNAP datasets. The latter do not have ground truths for hierarchical clustering, so they only test runtime. On synthetic data, they show that it outputs much better clusterings (as measured by NMI, which they don't define). On all datasets, they show their algorithm is acceptably scalable. Claims And Evidence: Yes, they are substantiated by accompanying proofs in the appendix as well as formal experiments. I would say the experiments are somewhat limited in scope, partly due to the difficulty of finding real datasets for hierarchical clustering with ground truth. 
However: why not compute your proposed dual objective scores for all tested datasets and see how the algorithms compare? Methods And Evaluation Criteria: Yes, though more testing could have been done (see my question about more options). Theoretical Claims: I did not verify the correctness of proofs in the appendix, but some of them were plain to see how they worked and the others are certainly plausible. Proofs were mentioned to be in the appendix wherever applicable. Experimental Designs Or Analyses: The experimental design seemed okay to me, though, as mentioned, there may be more tests they could run. Supplementary Material: I did not. Relation To Broader Scientific Literature: This is the first proposal of a formal objective for hierarchical overlapping clustering. It is notably defined off a seminal work by Dasgupta on regular hierarchical clustering. This is certainly an interesting area that I am surprised has not been studied this rigorously before. It could certainly open up an entirely new niche of study, though I do have qualms with their formulation (discussed later). Essential References Not Discussed: None that are essential. Other Strengths And Weaknesses: Strengths 1. Very interesting niche that should be studied 2. Nice extensions of seminal works by Dasgupta and Moseley & Wang 3. Shows some nice simple algorithms for this problem 4. Solid set of experiments Weaknesses 1. I have significant qualms with the proposed model, all of which can be found in the "Questions" section 2. Algorithm 1 is extremely limited (flat overlapping clustering, only 2 clusters). It doesn't even consider the hierarchical aspect, which seems central to this paper. That being said, it is interesting and useful, but it's the only algorithm they propose that approximates their proposed objective as opposed to its dual, and it's rather disappointing in its limitation. 3. 
Experiments could be extended. Ultimately, I believe this paper should be published since it initiates an interesting field of study. It also seems to do it in the "right way" with its foundations in the Dasgupta paper; however, the formulation and results are weak. I lean towards acceptance, but am not sure if this truly meets the bar for ICML. Other Comments Or Suggestions: I basically formulated these in the "Questions" section. There are typos/weird phrasings around, but I did not write them down. They don't significantly impede comprehension. Questions For Authors: 1. Are (3) and (4) necessary for Property 2.6? I would think they would be implied by the fact that S is an anti-chain and N in S is ordered between X and Y. - EDIT: I see (1) and (2) don't cover the "maximality" aspect, but can't you just say it's a maximal set that satisfies those two properties? I think that would be much more natural 2. Along the same lines, shouldn't the statement be "For two nodes X and Y... for any node set S..." instead of "if" there is a node set S? There would only not exist such a set if they were parent/child, right? 3. Consider the example from your paper Fig 1(b). Here, c is assigned 1/2 to both N1 and N2. Am I correct to say that there is only one MCA of (b, c), which is N1, and only one MCA of (c, d), which is N2? Then, the contributions of cost from these edges are just $2w(b, c)$ and $2w(c, d)$ respectively, which is the minimum contribution possible. This feels like cheating to me - shouldn't the partial assignment of c to N2 hurt how much (b, c) contributes to the cost? For instance, say we removed the edges (a, c) and (b, c) from the graph 1(a) entirely. Shouldn't there be some negative impact on the cost, now that c is partially assigned to N1 and N2? It SHOULD just be assigned to N1, but it doesn't seem like a loss to add another assignment in this case. This doesn't seem like a desirable property. 
I think this issue is something you're hiding behind the example 1(d). You're trying to say there IS a cost to having too many assignments, but that only happens when pairs of vertices are given multiple overlapping assignments unnecessarily, since it increases the number of MCAs there are for an edge. Ultimately, I think my problem here is how you've defined MCA: it feels like N1 AND R should both be partially MCAs of (b, c) in 1(b), since c is assigned partially to N1 and N2. Note: The issues here are not resolved by bounding the width of the graph. 4. You claim that the longest anti-chain blocks all paths from leaves to root. Is this actually true? Consider the HC tree which is a leafy stick (a and b merge, then they merge with c, then d, ...). The longest anti-chain here is just a single node! Unless, that is, you include leaves in the anti-chains (but they aren't included in the width, so it seems weird...) You might want to require that there is some anti-chain that DOES do this (e.g., the children of the nodes in the chain consist of precisely the set of leaves). - EDIT: Is this actually covered by condition (3) in definition 2.1? I may have just missed this 5. Dual HOC: Can you briefly mention the Moseley and Wang dual when you define this on page 5, since this is the corresponding problem in the HOC setting? 6. In Proposition 3.2, you talk about using approximation methods based on the max potential dual, n*w(e). In HC, Moseley and Wang did a lot of analysis with this max potential dual, and there are similar limitations (I think it's 1/3 or 1/2). It would be great to find where this happens and cite it here! I can't remember off the top of my head. 7. Can you give some intuition as to why the algorithm approximation for the primal is dependent on the degree ratios? 8. I don't know much about stochastic block models - are they hierarchical in structure? 
I would think you'd want a hierarchical one to show what happens when you're not just representing a flat clustering. 9. What is NMI? 10. Why not compute your proposed dual objective scores for all tested datasets and see how the algorithms compare? Code Of Conduct: Affirmed. Overall Recommendation: 4
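For orientation on the objective this review discusses, here is a minimal sketch of Dasgupta's (non-overlapping) cost, which the paper generalizes via MCAs and belonging factors. The parent-map tree encoding and function names below are illustrative choices for this sketch, not taken from the paper.

```python
# Illustrative sketch of Dasgupta's cost for ordinary (non-overlapping)
# hierarchical clustering: each graph edge (x, y) contributes w(x, y) times
# the number of leaves under the lowest common ancestor of x and y.
# The tree encoding (a parent map) is a hypothetical choice for this sketch.

def dasgupta_cost(parent, weights):
    """parent maps each tree node to its parent (root maps to None);
    weights maps leaf pairs (x, y) to the edge weight w(x, y)."""
    def ancestors(v):
        path = []
        while v is not None:
            path.append(v)
            v = parent[v]
        return path

    # Count the leaves below every internal node by walking leaf-to-root.
    internal = {p for p in parent.values() if p is not None}
    leaves_under = {}
    for leaf in (v for v in parent if v not in internal):
        for a in ancestors(leaf):
            leaves_under[a] = leaves_under.get(a, 0) + 1

    def lca(x, y):
        on_x_path = set(ancestors(x))
        return next(a for a in ancestors(y) if a in on_x_path)

    return sum(w * leaves_under[lca(x, y)] for (x, y), w in weights.items())

# Leaves a, b, c: a and b merge into N1 first, then N1 joins c at root R.
parent = {"a": "N1", "b": "N1", "c": "R", "N1": "R", "R": None}
print(dasgupta_cost(parent, {("a", "b"): 1.0, ("b", "c"): 1.0}))  # 2 + 3 = 5.0
```

The Moseley–Wang dual mentioned in Q5/Q6 is the complement of this quantity, $n \cdot w(E)$ minus the cost, which is why approximating one does not automatically approximate the other.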
Rebuttal 1: Rebuttal: We thank the reviewer for the careful review and valuable comments. Let us address the concerns one by one. Q1: Yes, these four conditions of Property 2.6 can be simplified, since (3) implies that every $N\in S$ is ordered between $X$ and $Y$. However, we cannot say that it's a maximal set that satisfies (1) and (2) only, since (4) is necessary (this also relates to Q4, and sorry for the inaccurate statement therein). Please refer to the toy example in Figure 3, Appendix A.2. Nodes $N_1$ and $N_5$ form a maximal anti-chain between $b$ and $R$ satisfying (1) and (2), and the belonging factors of $b$ to them are $1/2$ and $1/4$, respectively. But $b$, $N_1$ and $N_5$ all belong totally to $R$, which makes Property 2.6 fail (RHS $=3/4$). The reason is that the path $(b, N_2, N_4, R)$ leaks even though $\{N_1, N_5\}$ is a maximal anti-chain, while (4) blocks this leak. Q2: Yes, your suggestion is quite natural; “for any node set S” is better. Q3: Note that in Fig 1(b), the contributions of cost from $(b,c)$ and $(c,d)$ are $3w(b,c)$ and $3w(c,d)$, respectively, since $|N_1|=|N_2|=3$. By our definition, indeed, the partial assignment of $c$ to $N_2$ doesn’t hurt the cost that $(b, c)$ contributes, because $(b, c)$ belongs entirely only to $N_1$. When $(a, c)$ and $(b, c)$ are removed from $G$, the cost of any HOC will certainly decrease since there are fewer edges. But now the optimal HOC should be of shape $D_2$ (exchanging the labels of $a, b$ and $d, e$), since the MCA of $(a,b)$ has size only $2$. In this case, $D_1$ indeed has a negative impact on the cost when compared to $D_2$. This is quite natural and desirable, isn’t it? (Perhaps there was a misunderstanding in calculating the cost of $D_1$.) As for $D_3$, we indeed want to say there is a cost to having too many assignments. But this cost mainly comes from enlarging the clusters themselves after excessive assignments. 
As long as you agree with our setting that a cluster can be treated as an MCA of an edge only if the edge is entirely contained in the cluster, then everything is easy to understand. This setting is consistent with Dasgupta’s cost for HC. For example, in Fig. 1(c), we do not treat $N_2$ as the MCA of $(c, d)$ even though $d$ belongs to $N_2$. Note that although the assignment of an endpoint of an edge does not impact a cluster that is not an MCA, it does impact the belonging factors to all MCAs. As for your doubt that $N_1$ and $R$ should both be partially MCAs of $(b, c)$ in Fig. 1(b): now that we have defined the MINIMAL common ancestor, the partial order between the two CAs $N_1$ and $R$ should be avoided. Q4: The statement here is indeed not accurate. Our original intention was to point out the analogous significance of the longest anti-chains in HC and HOC. But as shown by the instance in our response to Q1, not every maximal anti-chain necessarily blocks all paths from leaves to the root. We believe that a slight modification of this instance will give evidence to refute a longest anti-chain as well. The leafy stick that you propose is another counterexample and does not violate condition (3) in Def. 2.1. We can only say that a longest anti-chain is located, intuitively, as far down from the root as possible to block leaf-to-root paths. Thanks for pointing this out. Q5: Sure. We will mention Moseley and Wang’s dual for HC when we define the $k$-HOC dual on Page 5. Q6: Yes, Moseley and Wang proved a similar limitation of $1/3$ off the trivial upper bound $n*w(e)$ for binary HC clustering. We will briefly mention it in the discussion of Th. 3.1 in our updated version. (Since there is a strict 5000-character limit, we continue our response in the rebuttal to Reviewer 3QxV at the top.) --- Rebuttal Comment 1.1: Comment: Thank you for your responses. 
I am particularly happy with your response to Q3 (though I still think the fact that R isn't considered an MCA isn't theoretically "nice"). The theoretical results of the algorithm are still weak, but I think this paper is much more holistic than its algorithmic results alone. Therefore, I think that the paper deserves acceptance. --- Reply to Comment 1.1.1: Comment: Thank you very much for your recognition of our work. It is quite normal that different people have different opinions. We are happy to see multiple views of hierarchical overlapping clustering, which also sparks new thinking on this problem. Thank you again for your careful reading and valuable comments.
Summary: This work introduces and studies the hierarchical overlapping clustering (HOC) problem. In the clustering literature, many works have focussed on either (i) overlapping or (ii) hierarchical clustering; this work's aim is to reconcile both topics. As a first contribution -- inspired by the well-known Dasgupta cost function -- this paper introduces a cost function for overlapping hierarchical clustering. The proposed cost function has several desirable properties such as compatibility, additivity, and binary optimality. As a second contribution, the paper further proposes approximation algorithms for the dual and primal variants of the new cost function. These approximation algorithms hold for a restricted variant of the HOC problem, namely the $k$-HOC problem and the $2$-HOC problem. As a final contribution, the algorithm is tested on synthetic and real-world datasets. To speed up the algorithm, the paper uses local search heuristics. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes, I checked them. Experimental Designs Or Analyses: The experiment section is slightly decoupled from the main theoretical body of the work **1** The proposed algorithms are rather slow ($O(n^4)$ factor in the running time), which prohibits the algorithm from being applied at a large scale. This requires the paper to introduce local search heuristics to make the algorithms implementable. As such, Algorithm 2 is not 'formally' compared. **2** $k$-HOC experiments are only performed on synthetic data, and it is not clear to me how some metrics (such as NMI) are computed for overlapping settings for both methods compared. **3** No qualitative results on real-world datasets; the real-world datasets are only used for scalability experiments. There are experiments on MNIST in the appendix, however the NMI in those experiments is calculated using non-overlapping clusters. Supplementary Material: I checked the code, and it looks good. 
I did not find any issues with the implementation. Relation To Broader Scientific Literature: The paper effectively positions its contributions within existing literature on hierarchical and overlapping clustering, referencing Dasgupta (2016), Orecchia et al. (2022), and other foundational works. Essential References Not Discussed: N/A Other Strengths And Weaknesses: **S1)** The paper studies a new important problem that is worth studying. The problem formulation itself is quite intuitive, and a natural way to combine the hierarchical and overlapping clustering problems. This work could potentially lay the groundwork for future algorithms to be developed on. Throughout the main body and appendix the paper also states and proves intuitive properties of the cost function, which is a nice contribution. **S2)** The paper also introduces 2 approximation algorithms for the new objective function, one for the $2$-OC problem, and another for the $k$-HOC problem. These algorithms could provide a good starting point for future work on HOC. **S3)** Finally, the theory for the cost function and the proofs for the approximation guarantees seem sound to me - and non-trivial for the most part. **W1)** The approximation guarantees are given in fairly restricted settings, for the primal and dual variants of $2$-OC and $k$-HOC. The $2$-OC problem studies the overlapping bipartition problem and contains no hierarchical structure. The $k$-HOC problem does have a hierarchical component, but restricts itself to at most $k$ clusters/nodes. The approximation guarantee on the latter is only given for the dual variant of the problem (which in general is a bit easier to prove, like the dual of Dasgupta's cost function (Moseley & Wang, 2017)). It seems that the approximation guarantee follows rather directly from the approximation guarantee of the dual of $2$-OC. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the careful review and valuable comments. Let us address the concerns one by one. 1. Regarding the speed-up version that makes the comparison of Algorithm 2 not “formal”, let’s look at the two strategies. The first is a good initialization with two non-overlapping clusters and using “move” instead of “exchange” during local search. We don’t think this harms the results of Algorithm 2 much, for the following reason. Consider another process as follows. We first run Algorithm 2 rigorously until the exchange-based local search gets stuck. By Theorem 3.6, we get our approximation guarantee $\frac{2}{3\sqrt{6}}-\Theta(\frac{1+\epsilon}{n})$. Then we go on to move nodes following the "move" strategy. Since we move nodes only if the cost improves, when all nodes get stuck, we have a cost no worse than that obtained from Algorithm 2; in other words, this is also an approximation algorithm with the same factor $\frac{2}{3\sqrt{6}}-\Theta(\frac{1+\epsilon}{n})$. In effect, this is as if we had started our local search from a good overlapping initial clustering. The only difference in our strategy is that, in practice, we start our local search from a good non-overlapping clustering, which does not seem as good. However, the almost perfect NMI results in Figure 2 demonstrate that even this seemingly weaker strategy performs well. The second strategy is batch migration. In Algorithm 2, the nodes are exchanged one by one until they get stuck. Now we move more than one node in each iteration. This strategy makes no difference in effectiveness from the original algorithm, because the final state of both processes is “getting stuck”. Since the approximation guarantee holds for any initialization, any state that makes all nodes stuck can be viewed as a terminating state with this guarantee. So even Algorithm 2 itself can use batch migration without any loss in approximation guarantee. 2. 
The definition of the NMI metric for overlapping clustering has been provided in Appendix C.3. We have also pointed to its location in the “Datasets and evaluation” paragraph (Page 7) of the main text. 3. Regarding the lack of qualitative results on real-world datasets: we evaluate the effectiveness of our algorithm on synthetic datasets. Due to the lack of ground truth on real-world datasets, we are not able to verify the quality of our clustering on them. Nevertheless, we can compute the objective scores as Reviewer 4xK8 suggested (Q10 therein). However, there is still no way for us to compare them with the baseline methods, since no baseline can terminate within a reasonable time on large graphs when run on a personal computer. This also demonstrates the advantage of our algorithm in scalability. 4. Regarding W1, we think that 2-OC is the foundation of $k$-HOC, just as bipartition is the foundation of approximation algorithms for hierarchical clustering. We have paid much attention to 2-OC for both the dual and primal versions, and achieved approximation guarantees. Yes, we have not achieved an approximation guarantee for the primal $k$-HOC problem, since we have not found a proper way to analyze recursions of 2-OC. We think primal $k$-HOC needs more novel insights and is an interesting open problem. We thank the reviewer again and are happy to address any remaining concerns.
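The move-based local search discussed in this rebuttal (accept a single-node relocation only when it strictly improves the cost, stop when every node is stuck) can be sketched generically as below. The cut-weight objective here is a stand-in placeholder for illustration only, not the paper's 2-OC cost, and all names are hypothetical.

```python
def cut_weight(assign, edges):
    # Stand-in objective for this sketch: total weight of edges crossing the
    # bipartition. The paper's actual 2-OC cost would replace this function.
    return sum(w for (u, v), w in edges.items() if assign[u] != assign[v])

def local_search_move(nodes, edges, init, cost=cut_weight):
    """Move-based local search: relocate one node at a time, accepting a move
    only if it strictly improves the cost, until every node is stuck."""
    assign = dict(init)
    improved = True
    while improved:
        improved = False
        for v in nodes:
            current = cost(assign, edges)
            for c in (0, 1):
                trial = dict(assign)
                trial[v] = c
                if cost(trial, edges) < current:
                    assign, current, improved = trial, cost(trial, edges), True
    return assign

# Two triangles joined by a weak bridge; start from a deliberately bad split.
edges = {(0, 1): 1.0, (0, 2): 1.0, (1, 2): 1.0,
         (3, 4): 1.0, (3, 5): 1.0, (4, 5): 1.0, (2, 3): 0.1}
bad = {v: v % 2 for v in range(6)}
final = local_search_move(range(6), edges, bad)
# Since moves are accepted only on strict improvement, the final cost can
# never exceed the initial one — the termination argument used in the rebuttal.
```

Note that a raw min-cut objective is trivially minimized by putting everything in one cluster; the real 2-OC cost avoids this collapse, but the sketch still illustrates the "any stuck state is a valid terminating state" mechanics.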
Summary: Two variants of graph clustering are Hierarchical Clustering and Overlapping Clustering. While there are some studies of both variants, they were not previously considered simultaneously. The paper proposes a reasonable cost function that combines both variants, and investigates algorithms that minimize this cost function. Claims And Evidence: Claim 1: the proposed cost function for HOC makes sense. The paper supports this claim by showing that it reduces to previous variants for Hierarchical clustering trees. It also discusses additional properties that supports the intuitive meaning of this newly proposed cost function. Claim 2: There are effective algorithms for minimizing the proposed cost function. The paper proposes such algorithms, and experimentally verifies their performance. Methods And Evaluation Criteria: The main tool for the algorithm is local search. Theoretical Claims: Partially Experimental Designs Or Analyses: Experiments look fine. I also like the toy examples in the appendix. Supplementary Material: No. Relation To Broader Scientific Literature: Yes, to the best of my knowledge. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: It is an interesting result that may encourage others to look at both hierarchical and overlapping graph clustering models. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review and the recognition of our work. We hope that our study will encourage other researchers to pay attention to hierarchical and overlapping graph clustering, since we think this hybrid structure has great potential significance for recognizing real-world data organization. ============================================ (Sorry for occupying this space with the response to Reviewer 4xK8 due to the strict limit on character number.) Q7: In our proof of Th. 3.3, we focus on the ratio $cost^*_D/cost^*_P$ (Lemma B.4). We want to give an upper bound on this ratio such that proper bounds in terms of $n\cdot w(E)$ can be given to both of them ($cost^*_D$ is far from $n\cdot w(E)$, while $cost^*_P$ is far from 0, the farther the better). Since these two costs correspond to the same 2-OC partitioning, we consider the bilinear forms of both the primal and the dual versions of the cost, that is, a bilinear combination of the total edge weights within clusters and the corresponding cluster sizes. Note that the total edge weight within a cluster is the product of its size and density. When divided by a size term on both the numerator and denominator, this ratio becomes related to density (although there is a size term multiplied into each density). Now the density of the densest subgraph can bound this ratio. Since density is closely related to node degree, we turn the density ratio into the degree ratio for better comprehension. When the degree distribution varies wildly, the degree ratio $d_{\max}/d_{\mathrm{avg}}$ is not a good proxy for the density ratio, and the bound in Th. 3.3 is not so good. But when the degree ratio is close to $1$, these two ratios get close; now the density of each cluster becomes uniform and the cost is mainly determined by the cluster sizes. 
Now the primal cost is roughly the sum of the squares of the two cluster sizes (ignoring the bad cut edges outside), while the dual cost is roughly twice the product of the two cluster sizes. This yields a high-quality upper bound for $cost^*_D/cost^*_P$. Q8: The stochastic block model (SBM) is not hierarchical. It has two variants, the hierarchical SBM (HSBM) and the overlapping SBM (OSBM), but there has been no variant like HOSBM yet, since an HOSBM, in our opinion, should be built on a proper formulation of an HOC graph, just like the one we propose in our submission. However, please note that any variant of SBM can be reduced to a (flat) stochastic adjacency matrix, in which the probability can be specifically set for each pair of nodes after calculation. In our settings, the hierarchical and overlapping features are captured by $p_1, p_2, p_3$, which encode the hierarchies, and by $1-(1-p_3)^2$, which determines the density within overlapping areas (please refer to the 2nd paragraph of Appendix C.2). Q9: The formal definition of NMI for overlapping clustering has been given in Appendix C.3. It is a natural generalization of the classic NMI for non-overlapping partitions. Q10: In Fig. 2, we have demonstrated the cost results for all synthetic datasets (see the 2nd column for both levels). For real datasets, since no baseline method can deal with the large graphs listed in Table 1 (with our PC computing environment), we have no baseline to compare against. So we didn’t list the costs. However, the effectiveness of our algorithm has been evaluated in Fig. 1. Nevertheless, we have calculated the primal costs of the four networks in Table 1, which are $1.35E11$, $1.63E12$, $1.43E11$ and $1.65E10$ for Amazon, Youtube, DBLP-all and DBLP-cm, respectively. Finally, regarding W1 on the significance of Algorithm 1, please refer to our response to Reviewer 7SG8, the second item. We hope we have addressed all the concerns. If there are any remaining questions, we are happy to address them.
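Since NMI came up repeatedly (Q9 above and in the reviews), here is the classic NMI for non-overlapping partitions, which the overlapping variant in Appendix C.3 generalizes. This is the standard textbook computation, not the paper's code, and the function name is a hypothetical choice.

```python
from math import log

def nmi(labels_a, labels_b):
    """Classic normalized mutual information, 2*I(A;B)/(H(A)+H(B)),
    for two non-overlapping partitions given as equal-length label lists."""
    n = len(labels_a)

    def counts(items):
        c = {}
        for x in items:
            c[x] = c.get(x, 0) + 1
        return c

    ca, cb = counts(labels_a), counts(labels_b)
    joint = counts(zip(labels_a, labels_b))

    # Mutual information from the joint and marginal cluster frequencies.
    mutual = sum(k / n * log(n * k / (ca[a] * cb[b]))
                 for (a, b), k in joint.items())

    def entropy(c):
        return -sum(k / n * log(k / n) for k in c.values())

    denom = entropy(ca) + entropy(cb)
    return 2 * mutual / denom if denom > 0 else 1.0

print(nmi([0, 0, 1, 1], [1, 1, 0, 0]))  # identical partitions up to relabeling
```

Identical partitions (up to relabeling) score 1, independent partitions score 0; the overlapping generalization must additionally handle fractional memberships, which is why the paper defines it separately.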
Restoring Calibration for Aligned Large Language Models: A Calibration-Aware Fine-Tuning Approach
Accept (poster)
Summary: The paper points out the problem of overconfidence of preference-aligned Large Language Models (LLMs), and proposes two fine-tuning approaches to address the problem: calibration-aware fine-tuning (CFT) and regularized CFT (RCFT). Claims And Evidence: The biggest problem is that though the authors state that preference alignment (like RLHF and DPO) causes poor calibration in LLMs, they do not propose any modifications to preference alignment, but instead propose fine-tuning approaches. As we know that Supervised Fine-Tuning (SFT) and RLHF are two different post-training stages, it is essential for the authors to justify why fine-tuning (rather than improving over the current RLHF framework) is a better solution, and whether people should use the proposed CFT before or after RLHF, or completely replace RLHF with CFT. Moreover, it is an unfair comparison between an LLM fine-tuned with extra domain knowledge and an LLM not trained with domain knowledge. Intuitively the trained LLM will have more information on the in-domain tasks. Methods And Evaluation Criteria: Theoretical explanation: The authors propose calibratable and non-calibratable regimes. It would be better if they explained up front that the difference between these two regimes is whether it is possible to achieve perfect calibration without sacrificing accuracy. Proposed SFT method: It is unclear why the proposed domain-specific fine-tuning in Sec. 5.1 can help improve LLM calibration. In both Sec. 5.1 and 5.2, the authors no longer mention the calibratable or non-calibratable regimes at all. It is hard to tell when people should use methods in Sec. 5.1, and when to use methods in Sec. 5.2. Evaluation criteria: The authors use four RLHF-trained models and evaluate on a wide range of datasets. Theoretical Claims: The theoretical proofs look good, though I think the authors should not restrict the analysis to only four possible answers (only the four-answer setting (A, B, C, D) is allowed), as that is not practical in real applications. 
Experimental Designs Or Analyses: The experiments are rather complete on four tasks, and the authors use a wide range of metrics: ECE, classwise-ECE, accuracy and win-rate. However, it is unclear at which stage the authors apply CFT. If they apply CFT after RLHF, that means post-training of a pre-trained model will now include three steps: traditional SFT, RLHF, and CFT. This will create a very long training pipeline, which is impractical, and the authors need to discuss this limitation. Since the authors propose to calibrate RLHF-trained models, it is also important to see if the calibration approach will hinder the original preference alignment ability, for example by testing their models on benchmarks like Arena-Hard. Otherwise they are simply fine-tuning a model for a specific usage instead of generally improving the RLHF-trained models. In fact, there are already previous works that improve LLM calibration through RLHF and maintain preference alignment ability [1][2], and they did extensive studies on whether improving the RLHF framework will hinder other generic capabilities. [1] When to Trust LLMs: Aligning Confidence with Response Quality. ACL 2024. https://arxiv.org/abs/2404.17287 [2] Taming Overconfidence in LLMs: Reward Calibration in RLHF. ICLR 2025. https://arxiv.org/abs/2410.09724 Supplementary Material: Yes, I checked both the theoretical proofs and experiment details. Relation To Broader Scientific Literature: The authors echo previous research showing that preference-aligned LLMs are more overconfident. However, they are proposing an SFT-based approach to improve model calibration. There are of course methods directly improving over traditional SFT [1][2], but the authors in this paper are proposing a CFT method after the RLHF stage, so they will need to justify the long post-training pipeline of: SFT -> RLHF -> CFT. [1] Teaching Models to Express Their Uncertainty in Words. https://arxiv.org/abs/2205.14334. 
[2] Enhancing confidence expression in large language models through learning from past experience. https://arxiv.org/abs/2404.10315. Essential References Not Discussed: Improving over SFT: [1] Enhancing confidence expression in large language models through learning from past experience. https://arxiv.org/abs/2404.10315. Improving over RLHF: [2] When to Trust LLMs: Aligning Confidence with Response Quality. ACL 2024. https://arxiv.org/abs/2404.17287 [3] Taming Overconfidence in LLMs: Reward Calibration in RLHF. ICLR 2025. https://arxiv.org/abs/2410.09724 Other Strengths And Weaknesses: The paper's writing is problematic: 1. The definition of multiple-choice question in Sec. 3 and Sec. 4 is too narrow: only the four-answer setting (A, B, C, D) is allowed. The authors could relax this to any small number of choices, not just four. 2. There are many grammar errors in the writing; to list a few: "Another researchers teach the LLMs" -> "Other researchers teach..." "The DPO method is to directly optimize of the policy without explicitly training the reward function" -> "The DPO method is to directly optimize the policy ..." 3. Figure 2. The description of the y-axis is unclear, and the authors need to mention which "value" it is. Other Comments Or Suggestions: N/A Questions For Authors: 1. What is the overall post-training pipeline when CFT is applied? 2. When to use CFT and when to use RCFT? 3. Will CFT training reduce the model's preference alignment ability? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank reviewer hpMa for the comments and suggestions. Below we provide our answers to your questions. >Q1. The biggest problem is ... not propose any modifications to preference alignment. A. Thank you for the comment. We would like to clarify that this is better considered a realistic practical scenario than a problem. To begin with, consider a practical deployment scenario, which is the primary focus of our work. A practitioner aiming to develop their own model typically starts by downloading an open-source LLM and adapting it to their specific needs. Importantly, for many such models, the intermediate checkpoints (e.g., SFT stages) are not publicly available. As a result, practitioners are often unable to modify or re-run the full preference alignment process. In such cases, post-alignment fine-tuning is the most accessible strategy. Our approach is designed for this realistic setting—providing calibration improvements without requiring access to earlier training stages. We have revised the paper to make this point clearer and more explicit to readers. In the scenario where the practitioner is able to go through the full training pipeline, this is also not an issue; we provide our answer below. >Q2. What is the overall post-training pipeline? Justify the long post-training pipeline. A. CFT is a post-hoc fine-tuning step applied after RLHF, and we will clarify this in the revised manuscript. Following our previous response, in scenarios where developers will go through the full training pipeline, both replacing RLHF and post-hoc fine-tuning are valid options—what matters most is which method yields better performance. Methods that aim to replace RLHF—such as PPO-C in [2]—are promising, but they are evaluated under different experimental settings, making a direct comparison difficult within a short timeframe. In our work, we demonstrate the effectiveness of our approach by outperforming the strong baseline of TS. 
We believe our method is robust and competitive with RLHF-replacement approaches. >Q3. It is an unfair comparison w/wo domain knowledge. A. We believe that the comparison remains appropriate for several reasons. First, regarding ECE, the concern about fairness seems to stem from intuition rooted in more standard metrics like accuracy, where additional domain knowledge can indeed lead to higher performance. However, ECE measures the discrepancy between confidence and accuracy, so domain knowledge does not inherently bias ECE comparisons. Second, we have included experiments on out-domain tasks to further mitigate any potential effects from domain-specific knowledge. Finally, the ultimate goal of training language models is to obtain models that perform well in real-world settings. If incorporating additional knowledge leads to better-calibrated models, it should be considered a strength rather than a limitation. >Q4. It is unclear … hard to tell when people should use methods in Sec 5.1 and when to use methods in Sec 5.2. A: Before addressing it directly, we would like to first clarify a potential misunderstanding regarding Section 5. In Theorem 4.6, we prove that ECE ≤ TCE, which implies that minimizing TCE is sufficient for achieving low ECE. This reduces the goal of obtaining a well-calibrated model to finding one that is close to the target probabilistic model, formulated as the following constrained optimization problem: $$\max \text{ACC}(\pi) \quad \text{s.t.} \quad \text{ECE}(\pi) = 0.$$ In Section 5, we focus on how to approximate both accuracy and ECE in the context of LLMs—covered in Sections 5.1 and 5.2, respectively. Now, returning to your question: Sections 5.1 and 5.2 do not present two separate methods, but rather provide a breakdown of our final algorithm. >Q5. I think the authors should not restrict to only four possible answers (only four-answer setting (A,B,C,D) is allowed), as that is not practical in real applications. A. 
We chose to use the four-option format (A, B, C, D) for concreteness and readability, aiming to reduce the burden of mathematical notation and make the presentation more accessible to experimental researchers. This is explicitly stated in our paper (lines 113–114): “While we consider four options in our analysis for concreteness, the framework naturally extends to any number of alternatives.” We will further revise the text to make this generality clearer. >Q6. The original preference alignment ability. In addition to the win rate results presented in our original submission, we have conducted three more experiments related to preference alignment: CFT vs. DPO, AlpacaEval, and Arena-Hard. Due to space constraints, we refer to our response to Q4 of QE3c and the end of our response to USq4 for the detailed results. >Q7. Related work, suggestions on explaining the two regimes up front, and other writing issues. A. We have included the four mentioned papers in the revised version and have revised the writing accordingly. --- Rebuttal Comment 1.1: Comment: The authors have addressed my concerns and I have raised my score. Please make sure to include the justification of your post-training pipeline in the revised version. --- Reply to Comment 1.1.1: Comment: Thank you for the response. We'll make sure to clearly justify the post-training pipeline in the revised version.
Summary: This paper addresses poor calibration in Large Language Models (LLMs) after preference alignment procedures like RLHF and DPO. The authors identify that preference-aligned LLMs exhibit overconfidence due to "preference collapse," where models excessively favor certain responses regardless of their correctness. They develop a theoretical framework distinguishing between "calibratable" and "non-calibratable" regimes based on model accuracy thresholds, and propose two solutions: (1) Calibration-aware Fine-Tuning (CFT) for models in the calibratable regime, which restores calibration without compromising performance, and (2) Regularized CFT (RCFT) for the non-calibratable regime, which uses EM-algorithm-based ECE regularization to balance calibration and accuracy. Experiments across four models show their methods reduce Expected Calibration Error from 14-20% to 2-7% while maintaining or improving language capabilities and alignment with human preferences. Claims And Evidence: The paper's claims are generally supported by evidence, with experimental results clearly demonstrating ECE reductions from 14-20% to 2-7% across models. The connection between preference collapse and poor calibration shows correlation but not definitive causation. Win rate metrics support their claim about maintaining alignment capabilities, but this represents just one dimension of alignment quality. While they demonstrate cross-domain generalization, more diverse evaluation scenarios would strengthen their broader applicability claims. Methods And Evaluation Criteria: The calibration-aware fine-tuning methods directly target the identified overconfidence issue through targeted loss functions, which is sensible given the problem definition. Their use of multiple-choice QA datasets is appropriate since they provide clear ground truth for calculating calibration metrics. 
The confidence and classwise ECE metrics are standard for calibration assessment, making their evaluation protocol methodologically sound. Their evaluation balances both in-domain and out-domain generalization, tests multiple model architectures aligned with different methods (DPO and RLHF), and importantly, measures win rate on preference pairs to verify alignment preservation. This comprehensive approach addresses the key concern that improving calibration might compromise alignment quality. One limitation is that their evaluation focuses primarily on multiple-choice settings rather than free-form text generation, which would provide a more complete picture of calibration in real-world LLM deployments. Theoretical Claims: I verified the paper's theoretical proofs, focusing on: Proposition 4.1: Correctly proves that probabilistic generative models achieve zero ECE by definition, as predicted probabilities match observed frequencies. Theorems 4.4 and 4.5 (Upper/Lower Bounds of TCE): The bounds are correctly established by constructing appropriate examples, but the constant C in the lower bound lacks specific derivation, making practical application less clear. Theorem 4.6 (Upper bound for ECE): This uses triangle inequality to establish that classwise ECE is bounded by TCE, which is mathematically valid. The calibratable/non-calibratable regime distinction follows logically from these bounds, though the paper simplifies this conceptually when moving to practical implementation. The EM-algorithm for probability estimation is theoretically sound, though the proof for convergence is not provided. Experimental Designs Or Analyses: The experimental design is generally sound but has several limitations. While they use appropriate models (four different architectures with both DPO/RLHF alignment), metrics (ECE, accuracy, win rate), and testing conditions (in-domain and cross-domain), the analysis lacks statistical significance testing and confidence intervals. 
The ablation studies are minimal, only comparing the $L_{\mathrm{SFT2}}$ and $L_{\mathrm{ECE}}$ losses in the appendix. The regularization parameter λ is fixed at 1 without sensitivity analysis, and the win rate metric only captures binary preference alignment rather than nuanced quality dimensions. Supplementary Material: No. Relation To Broader Scientific Literature: This work extends prior work on LLM calibration by Jiang et al. (2021), Xiao et al. (2022), and Chen et al. (2022), who identified miscalibration issues in LLMs, but specifically addresses the previously unexamined problem of how preference alignment techniques (RLHF/DPO) impact calibration. Essential References Not Discussed: No. Other Strengths And Weaknesses: See comments above. Other Comments Or Suggestions: No. Questions For Authors: 1: How might your calibration methods be adapted for cases where we only have black-box API access to LLMs without fine-tuning capabilities? 2: Have you considered how quantization or other efficiency techniques might impact calibration properties in aligned models? Code Of Conduct: Affirmed. Overall Recommendation: 4
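For reference, the confidence-ECE and classwise-ECE metrics discussed in this review are typically estimated with a binning scheme along the following lines. This is a generic sketch on synthetic data (uniform-width bins, a hypothetical 4-option model), not the authors' exact implementation:

```python
import numpy as np

def binned_ece(conf, correct, n_bins=10):
    """Uniform-width-binned ECE: bin-weighted mean of |accuracy - confidence|."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

def classwise_ece(probs, labels, n_bins=10):
    """Classwise ECE: average the binned gap over each option's probability column."""
    return np.mean([
        binned_ece(probs[:, k], (labels == k).astype(float), n_bins)
        for k in range(probs.shape[1])
    ])

rng = np.random.default_rng(1)
logits = rng.normal(size=(3000, 4))                  # 4 options: A, B, C, D
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
# Draw each label from the model's own distribution (inverse-CDF sampling),
# so the synthetic model is calibrated by construction and both estimates stay small.
labels = (probs.cumsum(axis=1) > rng.random((3000, 1))).argmax(axis=1)
print(binned_ece(probs.max(axis=1), labels == probs.argmax(axis=1)))
print(classwise_ece(probs, labels))
</code-omitted-sentinel>```

A deliberately miscalibrated input (e.g. constant 0.99 confidence on always-wrong answers) drives the estimate toward 0.99, which is the kind of gap the review's 14-20% figures describe.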
Rebuttal 1: Rebuttal: We thank reviewer USq4 for the insightful comments and questions. Below we provide our responses to the questions. >Q1. One limitation is that their evaluation focuses primarily on multiple-choice settings rather than free-form text generation, which would provide a more complete picture of calibration in real-world LLM deployments. A. Thank you for the comment. We agree that calibration in free-form generation is important. We focus on multiple-choice settings as they provide a controlled and quantifiable evaluation of calibration. Extending our method to free-form generation is an exciting direction, and we will discuss it as future work. >Q2. the constant C in the lower bound lacks specific derivation, A. Thank you for pointing this out. The constant C in the lower bound arises from solving a min-max optimization problem, which makes it difficult to obtain a closed-form expression. While we do not derive an explicit value for C, we will clarify its origin and theoretical role in the revised manuscript. Characterizing or approximating C more precisely is an interesting direction for future work. >Q3. The EM-algorithm for probability estimation is theoretically sound, though the proof for convergence is not provided. A. Thank you very much for the question. When the optimization is performed over probability distributions (rather than neural network parameters), the convergence of our EM algorithm follows from standard EM theory, and we take this as given rather than presenting it as a contribution of our work. However, when optimizing over neural network parameters, convergence is no longer guaranteed due to the non-convexity and complexity of the underlying function space. We will include a discussion on the convergence properties of the algorithm in the revised manuscript to clarify these distinctions. >Q4. The experimental design is generally sound but has several limitations. 
While they use appropriate models (four different architectures with both DPO/RLHF alignment), metrics (ECE, accuracy, win rate), and testing conditions (in-domain and cross-domain), the analysis lacks statistical significance testing and confidence intervals. The ablation studies are minimal, only comparing the $L_{\mathrm{SFT2}}$ and $L_{\mathrm{ECE}}$ losses in the appendix. The regularization parameter λ is fixed at 1 without sensitivity analysis, and the win rate metric only captures binary preference alignment rather than nuanced quality dimensions.

A. Thank you very much for the helpful comments and suggestions. We have now conducted a more comprehensive ablation study on the hyperparameter λ to further support our analysis. The results of this study have been incorporated into the revised manuscript.

| Model | Metric | λ = 0 | λ = 0.4 | λ = 1 | λ = 1.8 | ECE only |
|-|-|-|-|-|-|-|
| Llama-3.1-8B-Tulu | ECE | 0.0883 | 0.1535 | 0.0897 | 0.0178 | 0.0002 |
| | cwECE | 0.0808 | 0.1014 | 0.0771 | 0.0106 | 0.0081 |
| | Acc | 0.8964 | 0.8409 | 0.8341 | 0.4366 | 0.2475 |
| Vicuna-7B | ECE | 0.1219 | 0.1620 | 0.0474 | 0.1052 | 0.0130 |
| | cwECE | 0.0774 | 0.0991 | 0.0459 | 0.0799 | 0.0270 |
| | Acc | 0.8322 | 0.7315 | 0.6015 | 0.3877 | 0.2290 |
| Olmo2-7B | ECE | 0.1003 | 0.1771 | 0.0989 | 0.0038 | 0.0030 |
| | cwECE | 0.0992 | 0.1008 | 0.0806 | 0.0113 | 0.0043 |
| | Acc | 0.8846 | 0.8427 | 0.8510 | 0.4901 | 0.2765 |
| Mistral-7B | ECE | 0.0976 | 0.1316 | 0.0979 | 0.0366 | 0.0021 |
| | cwECE | 0.0785 | 0.0733 | 0.0877 | 0.0617 | 0.0108 |
| | Acc | 0.9091 | 0.8085 | 0.8297 | 0.4217 | 0.2670 |

>Q5. How might your calibration methods be adapted for cases where we only have black-box API access to LLMs without fine-tuning capabilities?

A. Thank you for the question. Our approach relies on fine-tuning the model and is therefore not applicable to black-box APIs. Adapting our method to black-box models would go beyond a simple extension; it would effectively require developing an entirely new approach.
For improving calibration in black-box settings, techniques such as prompt engineering or encouraging the model to explicitly express its confidence are more suitable. We will include this discussion in the revised version of the paper.

>Q6. Have you considered how quantization or other efficiency techniques might impact calibration properties in aligned models?

A. Thank you very much—this is a great question. Previously, we did not explore the impact of quantization or other efficiency techniques on calibration. These methods can potentially alter the model's confidence estimates and thus affect calibration quality. We agree this is an important and practical direction, especially for deployment scenarios, and we will include it in our discussion of future work.

___

To all reviewers: additional experiments on alignment ability. Due to limited time, only two models were evaluated on Arena-Hard.

| Alpaca-Eval | DPO | CFT | RCFT |
|-|-|-|-|
| Llama-3.1-8B-Tulu | 21.4 | 22.6 | 19.6 |
| Vicuna-7B | 2.6 | 2.6 | 3.6 |
| Olmo2-7B | 24.2 | 22.9 | 23.1 |
| Mistral-7B | 26.0 | 26.8 | 25.2 |

| Arena-Hard | DPO | CFT | RCFT |
|-|-|-|-|
| Olmo2-7B | 19.4 | 19.2 | 20.2 |
| Mistral-7B | 18.9 | 18.3 | 18.0 |
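As a companion to the λ ablation in this thread, the shape of an (accuracy loss) + λ · (calibration penalty) trade-off can be sketched with a one-parameter toy. The log-loss and one-bin-ECE surrogates below are illustrative stand-ins, not the paper's actual SFT/ECE objectives, and the 80% accuracy is a made-up value:

```python
import numpy as np

acc = 0.8                              # toy model: 80% of chosen answers are right
c_grid = np.linspace(0.01, 0.99, 99)   # candidate confidence on the chosen answer

def objective(c, lam):
    sft_like = -np.log(c)              # rewards being confident on the label
    ece_like = np.abs(c - acc)         # one-bin ECE surrogate: confidence-accuracy gap
    return sft_like + lam * ece_like

for lam in (0.0, 0.4, 1.0, 1.8, 10.0):
    c_star = c_grid[np.argmin(objective(c_grid, lam))]
    print(f"lambda={lam:>4}: confidence {c_star:.2f}, ECE surrogate {abs(c_star - acc):.2f}")
```

In this toy, small λ leaves the optimizer at maximal confidence (a large confidence-accuracy gap), while a large enough λ pins confidence to the accuracy, mirroring the qualitative pattern in the ablation table above.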
Summary: This paper addresses the calibration issue in aligned large language models (LLMs) and proposes a calibration-aware fine-tuning approach to restore proper uncertainty quantification in these models. The motivation stems from the observation that alignment techniques can distort model confidence, leading to miscalibrated probabilities in downstream tasks. The proposed method introduces a fine-tuning procedure that explicitly optimizes for calibration metrics while preserving the alignment properties of the LLM. The approach is designed to be agnostic to the underlying alignment strategy, making it adaptable to different LLM architectures and training paradigms. The authors provide theoretical insights into how alignment affects calibration and demonstrate the effectiveness of their method through extensive empirical evaluations on multiple benchmarks. ## update after rebuttal First, I would like to apologize for not being able to respond directly to the authors during the discussion phase, due to limitations of the review system. Instead, I am using this update to clearly state my current position. The authors have provided a strong and effective rebuttal that addresses many of my original concerns. In particular, I sincerely appreciate their effort in conducting additional experiments based on theoretically justified bin sizes—especially considering how challenging such LLM experiments must be. That said, I still have two remaining concerns: - **Presentation clarity:** I believe the paper does not sufficiently distinguish between the ECE used for evaluation and the ECE used in the objective function. Given that multiple definitions of “ECE” appear throughout the paper, I found it somewhat confusing—even as someone who works regularly with calibration metrics. Improving this distinction would greatly help with readability. 
- **Distinction between fine-tuning and standard classification tasks:** While I agree with the authors’ explanation regarding how labels are handled differently in the objective, I still find it unclear why methods from classification settings would not be applicable here. After all, the goal—minimizing ECE—is common to both this work and prior approaches in classification calibration. Is the difference simply in how probabilities are computed with respect to the labels? (It is possible that I am still misunderstanding this point.) I believe it is crucial to clearly articulate the distinction between the two approaches. If that proves difficult, an alternative could be to empirically compare the proposed method with existing calibration regularization techniques developed for classification—such as smooth CE or other differentiable calibration metrics—applied in this context. This would be a simple yet effective way to differentiate your method. I would like to raise these points for discussion among the reviewer panel. For now, I will keep my original score, but depending on the outcome of the discussion, I may consider increasing it. Lastly, I would like to emphasize that I find this to be a very interesting and promising paper. Claims And Evidence: ### Claim 1 (Explaining why alignment affects calibration): The paper provides an intuitive argument that preference alignment alters model confidence by enforcing human-desired responses, potentially distorting probability estimates. While it is not entirely clear which part of the results explicitly shows the critical answer for "why preference alignment affects calibration", the fine-tuning approach naturally introduces a trade-off between accuracy and calibration error, which is well-motivated. The design of the objective function balances these two aspects in a reasonable manner, making the proposed method a natural approach to addressing miscalibration in aligned LLMs.
### Claim 2 (Proposed method mitigates calibration issues): While the paper focuses on LLM fine-tuning, the underlying problem is fundamentally equivalent to a long-standing question in the machine learning community: how to balance classification accuracy and calibration in predictive models. Although the specific task setting is different, the challenge addressed here is closely related to classical studies in calibration-aware learning. The proposed fine-tuning approach somewhat reduces calibration error in experiments. However, the justification for why this method is the best solution is weak.
- Why is fine-tuning with a calibration objective superior to alternative methods such as temperature scaling?
- Achieving good predictive accuracy and calibration simultaneously has been widely studied in general machine learning classification problems (please see the following references). Beyond differences in objectives and tasks, what is the fundamental distinction between these approaches and the proposed method? Why is the proposed method preferable to extending those studies? Could some of the methods proposed in the following papers be incorporated into the fine-tuning objective of the LLM without too much computational complexity?
- [Kumar et al., ICML2018](https://proceedings.mlr.press/v80/kumar18a/kumar18a.pdf#:~:text=Trainable%20Calibration%20Measures%20For%20Neural,so%20compro%02mise%20the%20many%20legitimately) - [Krishnan et al, NeurIPS2020](https://arxiv.org/pdf/2012.07923) - [Karandikar et al., NeurIPS2021](https://arxiv.org/pdf/2108.00106) - [Popordanoska et al., NeurIPS2022](https://proceedings.neurips.cc/paper_files/paper/2022/file/33d6e648ee4fb24acec3a4bbcd4f001e-Paper-Conference.pdf#:~:text=We%20propose%20a%20tractable%2C%20differentiable%2C,ECEKDE%20scales%20well) ### Claim 3 (Empirical results demonstrate effectiveness without degrading model performance): In p.6, the Evaluation Metric section defines ECE as a nonparametric estimator based on binning. Recent studies have shown that this estimator suffers from significant estimation bias (e.g., [Futami & Fujisawa, NeurIPS2024](https://proceedings.neurips.cc/paper_files/paper/2024/file/9961e42624a6c083279303767c73269d-Paper-Conference.pdf), for binary classification; [Fujisawa & Futami, 2024](https://arxiv.org/pdf/2406.06227), for multiclass classification). According to these works, binning-based ECE has a slow convergence rate of $O(1/n^{1/3})$ and introduces a significant bias. The optimal number of bins to minimize this bias in an upper-bound sense is $O(n^{1/3})$, but this paper fixes the number of bins to $10$, which likely results in substantial estimation bias. Given this, the numerical results presented in the paper should be interpreted with caution, as the validity of the estimated calibration error is questionable. Methods And Evaluation Criteria: Many of the concerns regarding the methods and evaluation criteria have already been discussed in the above section. 
To briefly summarize: - While the paper focuses on LLM fine-tuning, the underlying problem is fundamentally equivalent to the long-standing question in the machine learning community: how to balance classification accuracy and calibration in predictive models. Although the specific task setting is different, the challenge addressed here is closely related to classical studies in calibration-aware learning. The justification for why this method is the best solution remains weak: - Why is fine-tuning with a calibration objective superior to alternative methods such as temperature scaling? - Could existing methods from classification calibration research (e.g., those by [Kumar et al., ICML2018](https://proceedings.mlr.press/v80/kumar18a/kumar18a.pdf#:~:text=Trainable%20Calibration%20Measures%20For%20Neural,so%20compro%02mise%20the%20many%20legitimately), [Krishnan et al, NeurIPS2020](https://arxiv.org/pdf/2012.07923), [Karandikar et al., NeurIPS2021](https://arxiv.org/pdf/2108.00106), [Popordanoska et al., NeurIPS2022](https://proceedings.neurips.cc/paper_files/paper/2022/file/33d6e648ee4fb24acec3a4bbcd4f001e-Paper-Conference.pdf#:~:text=We%20propose%20a%20tractable%2C%20differentiable%2C,ECEKDE%20scales%20well) be incorporated into the fine-tuning objective without excessive computational complexity? - The empirical evaluation has methodological concerns, particularly regarding the estimation bias in ECE due to fixed binning (see [Futami & Fujisawa, NeurIPS2024](https://proceedings.neurips.cc/paper_files/paper/2024/file/9961e42624a6c083279303767c73269d-Paper-Conference.pdf), [Fujisawa & Futami, 2024](https://arxiv.org/pdf/2406.06227)). The choice of 10 bins likely introduces substantial estimation bias, affecting the validity of the reported calibration improvements. Theoretical Claims: I have reviewed the proofs and found no obvious errors; the mathematical derivations appear to be correct. 
Assuming that the proofs are entirely correct, the claims derived from the theorems are reasonable in themselves. However, it is not clear that these results provide a direct answer to the question of "why preference alignment affects calibration." Instead, the theoretical findings seem to demonstrate the trade-off between accuracy and calibration, a well-known issue that has been widely discussed in the general machine learning community. From a scientific perspective, a more appropriate claim may be that the paper formalizes this accuracy-calibration trade-off within the context of fine-tuning LLMs, rather than providing a direct theoretical justification for the miscalibration induced by preference alignment. Experimental Designs Or Analyses: As I mentioned above, the evaluation relies on ECE with binning, which introduces significant estimation bias. As shown in [Futami & Fujisawa, NeurIPS2024](https://proceedings.neurips.cc/paper_files/paper/2024/file/9961e42624a6c083279303767c73269d-Paper-Conference.pdf) and [Fujisawa & Futami, 2024](https://arxiv.org/pdf/2406.06227), binning-based ECE has a slow convergence rate and can significantly over- or under-estimate calibration error. The choice of $10$ bins likely introduces substantial bias, affecting the reliability of reported calibration improvements. Alternative or bias-corrected calibration metrics should be considered to ensure more robust evaluation. Supplementary Material: I have reviewed the supplementary material, including the theoretical proofs, with a reasonable level of rigor. Additionally, I have identified a few typos and graphical errors in the supplementary material. These are listed in the Other Comments Or Suggestions section. 
Relation To Broader Scientific Literature: This work is closely related to prior research in calibration-aware learning in classification models, where calibration metrics are incorporated as regularization terms in the objective function to jointly optimize predictive accuracy and calibration performance. Although this paper focuses on fine-tuning LLMs, the core problem it addresses is fundamentally equivalent to the long-standing challenge in classification models: balancing accuracy and calibration in predictive learning. Several prior studies have explored this challenge: - [Kumar et al., ICML2018](https://proceedings.mlr.press/v80/kumar18a/kumar18a.pdf#:~:text=Trainable%20Calibration%20Measures%20For%20Neural,so%20compro%02mise%20the%20many%20legitimately): One of the earliest works to directly integrate Expected Calibration Error (ECE) optimization into model training. They propose the Maximum Mean Calibration Error (MMCE), a new metric based on RKHS kernels, which is added as a regularization term to the cross-entropy loss. - [Krishnan et al, NeurIPS2020](https://arxiv.org/pdf/2012.07923): Introduce a differentiable loss that explicitly penalizes the discrepancy between model confidence and actual correctness, enabling direct optimization of calibration error. - [Karandikar et al., NeurIPS2021](https://arxiv.org/pdf/2108.00106): Develop a differentiable calibration loss by continuous relaxation of histogram binning for ECE computation. Their method, when incorporated into training, significantly reduces ECE for individual models. - [Popordanoska et al., NeurIPS2022](https://proceedings.neurips.cc/paper_files/paper/2022/file/33d6e648ee4fb24acec3a4bbcd4f001e-Paper-Conference.pdf#:~:text=We%20propose%20a%20tractable%2C%20differentiable%2C,ECEKDE%20scales%20well): Propose a Kernel Density Estimation-based ECE estimator, making Canonical Calibration Error (an extension of ECE) differentiable. 
This allows it to be integrated into the training objective, achieving a favorable trade-off between accuracy and calibration performance. Beyond calibration-aware training methods, the validity of ECE-based uncertainty estimation itself has been questioned in recent studies. As highlighted by [Futami & Fujisawa, NeurIPS2024](https://proceedings.neurips.cc/paper_files/paper/2024/file/9961e42624a6c083279303767c73269d-Paper-Conference.pdf) and [Fujisawa & Futami, 2024](https://arxiv.org/pdf/2406.06227), binning-based ECE estimators suffer from significant estimation bias and slow convergence rates. Additionally, some studies have provided theoretical evaluations of bias in ECE estimators based on uniform-mass binning (UMB) ([Gupta et al., ICML2021](https://arxiv.org/pdf/2105.04656), [Gupta et al., NeurIPS2020](https://arxiv.org/pdf/2006.10564)). There are some other related studies ([Gruber et al., NeurIPS2022](https://arxiv.org/pdf/2203.07835), [Sun et al., NeurIPS2023](https://arxiv.org/pdf/2305.10886)). It is unclear whether this paper uses UMB or uniform-width binning, but the choice of binning strategy significantly impacts bias and variance in calibration error estimation. Given these findings, it is important to acknowledge the potential limitations of ECE-based evaluation and consider methods that minimize bias, such as properly setting the number of bins or using alternative bias-corrected estimators. Including such discussions would strengthen the empirical robustness of this work. Essential References Not Discussed: To properly position the contribution of this work and ensure a fair evaluation of calibration performance, it is essential to reference and discuss the prior research on calibration-aware learning and bias in ECE estimation that has been highlighted in this review.
- The paper should cite and discuss prior works on calibration-aware learning in classification models (e.g., [Kumar et al., ICML2018](https://proceedings.mlr.press/v80/kumar18a/kumar18a.pdf#:~:text=Trainable%20Calibration%20Measures%20For%20Neural,so%20compro%02mise%20the%20many%20legitimately), [Krishnan et al, NeurIPS2020](https://arxiv.org/pdf/2012.07923), [Karandikar et al., NeurIPS2021](https://arxiv.org/pdf/2108.00106), [Popordanoska et al., NeurIPS2022](https://proceedings.neurips.cc/paper_files/paper/2022/file/33d6e648ee4fb24acec3a4bbcd4f001e-Paper-Conference.pdf#:~:text=We%20propose%20a%20tractable%2C%20differentiable%2C,ECEKDE%20scales%20well)), which are highly relevant to the methodology of this paper. - The validity of ECE-based evaluation should be reconsidered, given recent studies showing significant bias in binning-based ECE estimators ([Gruber et al., NeurIPS2022](https://arxiv.org/pdf/2203.07835), [Sun et al., NeurIPS2023](https://arxiv.org/pdf/2305.10886), [Gupta et al., ICML2021](https://arxiv.org/pdf/2105.04656), [Gupta et al., NeurIPS2020](https://arxiv.org/pdf/2006.10564), [Futami & Fujisawa, NeurIPS2024](https://proceedings.neurips.cc/paper_files/paper/2024/file/9961e42624a6c083279303767c73269d-Paper-Conference.pdf), [Fujisawa & Futami, 2024](https://arxiv.org/pdf/2406.06227)). If ECE is used as a primary evaluation metric, proper adjustments (e.g., optimal bin selection or alternative estimators) should be considered. Other Strengths And Weaknesses: ### Strengths - Addresses an important issue in LLM alignment: The paper tackles the important problem of calibration degradation due to alignment, which is highly relevant given the widespread use of LLMs in real-world applications. - Proposes a well-motivated fine-tuning objective: The method explicitly balances accuracy and calibration, making it a conceptually reasonable approach to mitigating miscalibration. 
- Provides a theoretical framework: While certain connections could be clarified further, the paper presents a structured theoretical analysis, which helps to ground the empirical observations.
- Extensive empirical evaluation across multiple LLMs: The paper tests its method on various LLM architectures, demonstrating general applicability.

### Weaknesses

Lack of discussion on the limitations of the proposed method: The paper does not provide a sufficient discussion on when and where the proposed method is expected to perform well or poorly. For example:
- What are the computational costs associated with this fine-tuning method, and how do they compare to other calibration techniques?
- Are there trade-offs in terms of sample efficiency, convergence rate, or stability?

Providing some empirical evaluation regarding this topic would help clarify the practical applicability of the proposed method.

Other Comments Or Suggestions:
- In Definition 4.2: Is it necessary to provide the definition of $\mathrm{ECE}(\pi_{\theta})$?
- In Appendix A.2, line 575: "Expected Calibration Error (ECE) Naeini et al. (2015),...": I think you should use the bibtex-style citation here for [Naeini et al. (2015)].
- In Appendix A.2, line 576: (typo?) "Mlticlass-ECE..." --> "Multiclass-ECE..."
- In Figures 7 and 8 (p.20-21): Why is the plot of (g) for "Prob. of Option A"? I think you should show the plot for "Prob. of Option B" instead.

Questions For Authors: The following questions are critical for improving the clarity and contribution of the paper. If these issues are appropriately addressed, I will consider raising my overall score.
- Comparison with Alternative Calibration Approaches:
  - Why is fine-tuning with a calibration objective superior to existing post-hoc calibration methods such as temperature scaling?
- What is the relationship and significant difference between this paper and calibration-aware training in classification models, such as those by [Kumar et al., ICML2018](https://proceedings.mlr.press/v80/kumar18a/kumar18a.pdf#:~:text=Trainable%20Calibration%20Measures%20For%20Neural,so%20compro%02mise%20the%20many%20legitimately), [Krishnan et al, NeurIPS2020](https://arxiv.org/pdf/2012.07923), [Karandikar et al., NeurIPS2021](https://arxiv.org/pdf/2108.00106), and [Popordanoska et al., NeurIPS2022](https://proceedings.neurips.cc/paper_files/paper/2022/file/33d6e648ee4fb24acec3a4bbcd4f001e-Paper-Conference.pdf#:~:text=We%20propose%20a%20tractable%2C%20differentiable%2C,ECEKDE%20scales%20well)?
- Evaluation Metrics and ECE Bias:
  - The paper uses binning-based ECE, which is known to suffer from significant estimation bias (e.g., [Futami & Fujisawa, NeurIPS2024](https://proceedings.neurips.cc/paper_files/paper/2024/file/9961e42624a6c083279303767c73269d-Paper-Conference.pdf), [Fujisawa & Futami, 2024](https://arxiv.org/pdf/2406.06227)).
  - Could you provide revised experimental results using an optimal bin size setting as suggested by these studies? I think this is crucial to confirm whether the reported empirical results are convincing or not.
  - Does the paper employ uniform-mass binning (UMB) or uniform-width binning? Since the choice of binning strategy significantly impacts bias and variance, providing clarification would be valuable.
- Limitations of the Proposed Method:
  - In which scenarios is the proposed method expected to perform well, and in which cases might it fail?
  - Are there certain types of LLM architectures, alignment techniques, or dataset conditions where this approach is less effective?
  - What are the computational costs of this fine-tuning method? How do they compare to other calibration techniques? Is the ECE term in the proposed objective function differentiable?

Code Of Conduct: Affirmed. Overall Recommendation: 2
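The binning-bias concern raised in this review can be probed empirically: for a synthetic predictor that is calibrated by construction, the true ECE is zero, so whatever a binned estimator reports is pure estimation error, and its dependence on the bin count becomes visible. A generic sketch (uniform-width bins, synthetic binary outcomes; the bin counts bracket the paper's 10 and the cited $n^{1/3}$ rate):

```python
import numpy as np

def binned_ece(conf, correct, n_bins):
    """Uniform-width-binned ECE estimator."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

rng = np.random.default_rng(0)
n = 3000                                    # matches the sample size cited in the rebuttal
conf = rng.uniform(0.25, 1.0, size=n)       # predicted confidence of the top option
correct = rng.random(n) < conf              # outcomes occur at exactly the stated rate
# True ECE is 0 by construction; the printed values are estimator error alone.
for n_bins in (5, 10, 14, 50):              # 14 ~ 3000**(1/3), the cited rate
    print(n_bins, round(binned_ece(conf, correct, n_bins), 4))
```

Rerunning with different seeds shows how the residual estimate fluctuates with bin count, which is the sensitivity the review asks the authors to account for.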
Rebuttal 1: Rebuttal: We thank reviewer Ak1F for the insightful comments and questions. Below we provide our responses to the questions. >Q1. Why fine-tuning with a calibration objective superior to post-hoc calibration methods e.g. TS? A. While temperature scaling (TS) is a strong baseline, our method's superiority stems from addressing fundamental limitations of post-hoc calibration in LLMs: 1. Our approach explicitly minimizes the discrepancy between accuracy and confidence—the definition of ECE—rather than relying on a single scaling parameter. This direct optimization improves calibration while maintaining or enhancing performance. This aligns with previous work on training classifiers with calibration objectives, such as the AvUC loss in [Krishnan et al](https://arxiv.org/pdf/2012.07923) and the PAC-Bayes-based objective in [Fujisawa & Futami, 2024](https://arxiv.org/pdf/2406.06227), which demonstrated the superiority of optimization-based strategies over post-hoc methods. 2. Our approach generalizes better to unseen data and distribution shifts compared to TS, which risks overfitting to specific validation sets. Table 2 clearly demonstrates this advantage in out-domain scenarios—for example, with Olmo2-7B, our CFT method reduces out-domain cw-ECE to 0.0637 (vs. TS's 0.1196) while improving accuracy to 0.7085 (vs. DPO's 0.6635). Distribution shifts are common in language tasks, and TS is insufficient to handle such variations. We will include these points in our revised manuscript. >Q2. Relationship between this paper and calibration-aware training in classification models? A. The referenced works propose various calibration-aware training objectives for classification models, such as MMCE, AvUC, S-AvUC, and SB-ECE. Some of these are designed as accuracy–calibration trade-off objectives. Our approach shares this general principle, as it can also be viewed as an accuracy–calibration trade-off method. 
The key difference is that our accuracy and calibration objectives are specifically designed for generative models and LLMs, rather than classification models. For the accuracy objective, we use the SFT loss on next-token prediction. This loss not only promotes accuracy but also supports instruction following, knowledge grounding, and coherent text generation, all essential to LLMs. For the ECE objective, we formulate calibration as a generative modeling problem and apply an EM algorithm to optimize it accordingly. In summary, our method is tailored for LLMs, which represents a significant departure in both design and application.

>Q3. Evaluation Metrics and ECE Bias and optimal bin size.

A. Thank you for the helpful suggestion. The two referenced papers on ECE bias and optimal bin size are indeed insightful. In our work, we use the uniform-width binning (UWB) strategy, and we will make this choice explicit in the revised manuscript. According to the referenced work, the optimal number of bins for the multiclass setting scales as $O(n^{1/3})$. However, to apply this practically, one needs the exact constant rather than just the asymptotic rate. Upon further examination, we found that the optimal bin count involves a Lipschitz constant $L$ of the model, scaling as $(1 + L)^{2/3} n^{1/3}$. In practice, estimating the Lipschitz constant for transformers is challenging, as it is known to be potentially very large and difficult to compute reliably. Nonetheless, we can still make use of the theoretical rate $n^{1/3}$ as in the two mentioned papers. In our experiments, the sample size is 3,000, which implies an optimal bin count on the order of $3000^{1/3} \approx 14.4$. The bin size of 10 used in our paper is reasonably close to this rate, suggesting that our choice is consistent with the theoretical guidance.

>Q4. In which cases the proposed methods perform well and fail?

A.
Our approach is specifically designed for, and performs well on, LLMs that are originally well-calibrated but become poorly calibrated after preference alignment. In other scenarios—such as (1) traditional classification tasks, or (2) cases where the LLM is not well-calibrated to begin with—our method may not be as effective. >Q5. Certain types of LLM architectures, alignment techniques ... where this approach is less effective? A. In our experiments, we evaluate the method across different architectures, alignment techniques, and datasets to demonstrate its generality. However, due to computational constraints, we did not experiment with large-scale models such as 70B. While the effectiveness may vary at that scale, we believe our method is conceptually scalable and can be applied to larger models with appropriate resources. >Q6. Computational costs? Compare to other techniques? ECE term differentiable? A. On two A100 40GB GPUs, our approach takes approximately 1.5 hours to run for 5 epochs. The training time is comparable to label smoothing, but significantly slower than temperature scaling, as the latter does not require fine-tuning model weights. In addition, our ECE term is differentiable. --- Rebuttal Comment 1.1: Comment: Thank you very much for your thoughtful and respectful rebuttal. I truly appreciate the care you have taken in addressing each point in detail. ## Regarding Q1: Thank you for your detailed explanation. I would strongly encourage incorporating this discussion into the main text. Fujisawa & Futami (2024) also discuss how Temperature Scaling (TS) can sometimes achieve good recalibration performance depending on the order of the optimal bin size used for ECE estimation. In light of this, and in conjunction with the discussion in Q3, I would recommend re-evaluating your performance results to ensure the validity of your proposed method. 
## Regarding Q2: I understand that your setting—fine-tuning with preference data—is different from standard classification, and that your proposed objective is tailored for LLMs. The method is indeed interesting. That said, when viewed from a broader perspective, your formulation seems conceptually equivalent to a widely discussed approach in the literature that minimizes an objective of the form: (accuracy loss) + $\lambda$ × (calibration regularization). Therefore, I believe it would be helpful to more deeply explore and clarify your argument that classification and fine-tuning are fundamentally distinct. Are they truly different in essence? In basic preference fine-tuning settings, models are often trained to behave in a way that aligns with binary labels indicating whether outputs are preferred or not. Despite differences in reward function design or likelihood formulation, the underlying mechanism often involves computing binary classification probabilities via a logistic function. A deeper clarification of how your method fundamentally differs from classification-based regularization would help readers better appreciate your contribution. ## Regarding Q3: You are absolutely right about the Lipschitz constant. It makes sense to choose it based on the order of $\mathcal{O}(n^{1/3})$ in practice. That said, in my personal experience, the actual ECE value can vary significantly depending on whether the number of bins is 10 or 14. I understand that re-running LLM experiments can be computationally intensive, but re-evaluating the performance—including that of TS—under these settings would further substantiate your claims. Ideally, showing the updated numbers would be most helpful. ## Regarding Q4: Thank you for proposing an intriguing hypothesis. Could you consider expanding this point into a discussion of the potential limitations of your method? 
Clearly stating in which cases the method is effective and where it may face challenges would not only enhance the practical value of your work but also help guide future research. ## Regarding Q5: I appreciate your response regarding computational constraints. If possible, I would suggest including even a theoretical discussion on computational complexity or convergence behavior. (Although I may be mistaken, would the E-step, for example, have a complexity of $\mathcal{O}(N × M^{2})$?) ## Regarding Q6: I may have overlooked something, but I was curious—how is the objective based on nonparametric estimation through binning differentiable? Or is it that the ECE used for evaluation differs from the one used in your objective? Section 3 introduces cw-ECE and conf-ECE, whereas Section 4 refers simply to “ECE,” which left me slightly confused. It would be very helpful if you could clarify this distinction. --- Reply to Comment 1.1.1: Comment: Thanks for the responses, and we are sorry for the delayed reply, as conducting additional experiments required some time. >Q1 & 3. Thank you for the suggestion. We will incorporate the discussion into the main text. Following the referenced paper, we chose a bin size of 14 and re-evaluated all four methods across architectures and ECE types, in both in- and out-domain settings. We observe that CFT and RCFT consistently restore the ECE of DPO models. The comparison between our approach and TS remains consistent with our original findings. In 5 out of 8 conf-ECE comparisons, CFT outperforms TS. Additionally, we find that the ECE values of TS and CFT are closer under the new binning choice, which partially supports the idea that this bin size reduces ECE bias. We will include a discussion on the bin size rate of the referenced paper, together with these updated experimental results, in our revised paper. 
| Method | Llama3.1 In | Llama3.1 Out | Olmo2 In | Olmo2 Out | Vicuna In | Vicuna Out | Mistral In | Mistral Out |
|-|-|-|-|-|-|-|-|-|
| DPO | 0.1861/0.0988 | 0.1188/0.0657 | 0.1370/0.0773 | 0.0914/0.0630 | 0.1418/0.0664 | 0.0888/0.0993 | 0.1979/0.1010 | 0.1346/0.1187 |
| TS | 0.1158/0.0349 | 0.0559/0.0256 | **0.0490**/0.0329 | **0.0272**/0.0252 | 0.0377/0.0220 | **0.0297**/0.0523 | 0.0771/0.0380 | 0.1093/0.0582 |
| CFT | **0.0441**/0.0418 | **0.0520**/0.0344 | 0.0587/0.0376 | 0.0573/0.0356 | **0.0216**/0.0295 | 0.0308/0.0516 | **0.0602**/0.0207 | **0.0511**/0.0601 |
| RCFT | 0.1011/0.0783 | 0.0801/0.0525 | 0.0730/0.0365 | 0.0663/0.0512 | 0.0508/0.0397 | 0.0677/0.0552 | 0.0817/0.0457 | 0.0658/0.0506 |

The results are reported in the format: conf-ECE / cw-ECE. >Q2. Let's clarify the distinction between preference fine-tuning and classification. **Objective.** As you mentioned, the objective of preference alignment can be viewed as a classification loss with two classes “winner” or “loser”. However, this is fundamentally different from a standard classification problem. To illustrate, consider a simplified setting with three responses: $r_1, r_2, r_3$, where human preferences are $r_1 > r_2 > r_3$. The preference alignment objective introduces the following pairwise comparisons into the loss: $(r_1, r_2)$, $(r_2, r_3)$, and $(r_1, r_3)$. In the first comparison, $r_2$ is labeled as the loser since $r_1$ is preferred. However, in the second comparison, $r_2$ is labeled as the winner because it is preferred over $r_3$. Thus, the same response ($r_2$) appears as both a winner and a loser depending on the comparison. This shows that there is no consistent global label for each response—labels are defined only in the context of pairwise comparisons. A single response can belong to both the winner and loser classes in different pairs. In contrast, standard classification tasks (such as MNIST digit recognition) involve a clear and consistent labeling scheme. Each image belongs to exactly one of the digit classes (0 through 9), and these labels are globally defined and mutually exclusive. 
**Evaluation of calibration.** In standard classification settings, calibration error is typically evaluated on the same task. In contrast, for LLMs, calibration is often evaluated on different tasks—such as multiple-choice questions with four options (four classes)—which are distinct from the preference alignment task (two classes). Therefore, while adding a calibration regularization term is common practice in standard classification settings, directly applying this approach to the preference alignment loss lacks principled justification in this context. In contrast, our approach—post-hoc fine-tuning after the alignment stage—is more intuitively and conceptually appropriate, since both the accuracy and calibration losses are designed with the downstream evaluation task in mind.

>Q4. Sure, we will include the discussion of limitations, as outlined in our previous responses, in the revised version of the paper.

>Q5: The total computational complexity is given by $$L(5n + M + n/B),$$ where $L$ is the number of epochs and $B$ is the batch size.

E-step. It requires $n$ steps in the outer loop. We acknowledge that the inner loop, as currently written, may be misleading. It follows the standard EM-algorithm format but does not reflect the actual computational procedure. In practice, there is no need to iterate over all $M$ bins to determine bin membership for each sample.

M-step.
- $M$ steps to update the accuracies of the $M$ bins.
- $4n$ steps to update the four target probabilities for each sample.
- $n/B$ gradient steps to update the model parameters.

Regarding convergence, we will include a discussion on the standard convergence analysis of EM algorithms in the revised version of the paper.

>Q6. The ECE in the objective and the ECE used for evaluation are different. In Section 5.2, we define the ECE loss as: $$ L_{ECE}=D(p(x),conf_\pi(x)), $$ where $D$ is a differentiable divergence (we use the MSE). 
Regarding the binning process: $p(x)$ is fixed after binning and its gradient is detached. As a result, the overall loss is differentiable.
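To make the detach-then-regress structure concrete, here is a minimal pure-Python sketch (the function name and uniform-width binning are illustrative assumptions; the actual implementation operates on model confidences inside an autograd framework, where the per-bin accuracies $p(x)$ would be detached from the gradient graph):

```python
def ece_mse_loss(confidences, correctness, n_bins=10):
    """Sketch of an MSE-based ECE loss L_ECE = D(p(x), conf_pi(x)).

    For each sample x, p(x) is the empirical accuracy of the bin that
    conf(x) falls into. p(x) is computed once from the binning and then
    treated as a constant (in an autograd framework its gradient would
    be detached), so the loss stays differentiable in the confidences.
    """
    assert len(confidences) == len(correctness)
    n = len(confidences)
    # Assign each sample to a uniform-width bin (UWB strategy).
    bin_of = [min(int(c * n_bins), n_bins - 1) for c in confidences]
    # Empirical accuracy per bin -> the fixed targets p(x).
    hits, cnt = [0.0] * n_bins, [0] * n_bins
    for b, y in zip(bin_of, correctness):
        hits[b] += y
        cnt[b] += 1
    p = [hits[b] / cnt[b] if cnt[b] else 0.0 for b in range(n_bins)]
    # Differentiable part: squared gap between conf(x) and its target p(x).
    return sum((c - p[b]) ** 2 for c, b in zip(confidences, bin_of)) / n
```

With perfectly calibrated confidences the loss is zero; any confidence–accuracy gap inside a bin contributes quadratically.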
Summary: The paper tries to answer a well-known question: why is an aligned model not well-calibrated, and how can we fix it? The authors start with a probabilistic generative model and define TCE, then derive upper and lower bounds of TCE. Then, depending on the accuracy of the current model, one can either obtain calibration without sacrificing accuracy or not. The authors state that most cases are calibratable. To recover the calibration of models, the authors propose an EM algorithm to compute a calibration loss. Together with the SFT loss, CFT achieves better calibration and accuracy. Claims And Evidence: * Upper and lower bound of TCE. Supported by Thm 4.4-4.6. * Calibratable and uncalibratable cases based on accuracy. Supported by Thm 4.4-4.6. * EM algorithm to recover calibration. Supported by experiments. Methods And Evaluation Criteria: Methods and evaluation make sense. Theoretical Claims: I did not carefully check the proofs of Thm 4.4-4.6, but they seem correct. ### Question & Weakness * The theory claims that both calibratable and un-calibratable cases exist. Can we draw any sense of when the model falls into each case from a theoretical perspective? * The theory contribution is unclear. If my understanding is correct, the theory mainly states that there are two possible scenarios and does not directly relate to other parts of the paper, such as algorithms. The insight drawn from the theory is vague. Experimental Designs Or Analyses: The experiment is overall sound. ### Question and Weakness * There is only one baseline (temp scaling); it would be better to compare with other baselines such as label smoothing, [1], etc. * A better comparison of the win rate is between CFT- and DPO-generated responses, since response quality may decrease but still outperform the one being compared in the current setting. 
[1] https://arxiv.org/abs/2102.09690 Supplementary Material: N/A Relation To Broader Scientific Literature: The paper aligns with a line of calibration work such as temperature scaling, Say-Self, etc. This work gives a theoretical analysis of calibration and a new EM calibration algorithm, which differs from prior works. Essential References Not Discussed: Some relevant methods are not discussed: [1] https://arxiv.org/abs/2102.09690 [2] https://arxiv.org/pdf/2405.20974 Other Strengths And Weaknesses: Other strengths: * The paper is generally well-written. Other weaknesses: * See above Other Comments Or Suggestions: ## update after rebuttal Since the authors addressed my concerns, I raised my score to 3. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank reviewer QE3c for the insightful comments and questions. Below we provide our responses to the questions. >Q1. Can we draw any sense of when the model falls into each case from a theoretical perspective? A. This is a great question. Determining which regime a model falls into ultimately reduces to understanding whether its accuracy exceeds a certain threshold. This, in turn, depends on two key factors: 1. What is the accuracy threshold for a given neural network architecture? 2. Given such an architecture, which training algorithms lead the model to fall into each regime? Answering these questions requires a deeper theoretical analysis of the properties of transformers, which is currently an open and challenging direction. While we do not yet have a definitive theoretical answer, our experimental results provide some insights. We will add a discussion of this point in the final section and consider it an important direction for future work. >Q2. The theory contribution is unclear. If my understanding is correct, the theory mainly states that there are two possible scenarios and does not directly relate to other parts of the paper, such as algorithms. The insight drawn from the theory is vague. A. Our theoretical results are directly connected to the algorithmic design and other components of the paper. Let us clarify. In Theorem 4.6, we prove that ECE ≤ TCE. In other words, obtaining a well-calibrated model reduces to finding one that is close to the **target probabilistic model**, i.e., the solution to the following constrained optimization problem: $$ \max \text{ACC}(\pi) \quad \text{s.t.} \quad \text{ECE}(\pi) = 0. $$ In Section 5, we discuss how to approximate ACC and ECE in the context of LLMs—specifically in Sections 5.1 and 5.2, respectively. Having defined approximations for the ACC and ECE losses, we introduce an EM algorithm to learn the underlying probabilistic model. 
This choice is motivated by our theoretical formulation of calibration as a probabilistic inference problem, for which EM is a natural and widely used solution. >Q3. There is only one baseline (temp scaling); it would be better to compare with other baselines such as label smoothing, [1], etc. A. Thank you for the question. First, we would like to emphasize that temperature scaling is a strong baseline for reducing ECE. Prior work [3,4] has shown that it is generally more effective than label smoothing and other techniques for improving calibration. Nonetheless, we conducted additional experiments and found that, similar to image classification tasks, label smoothing remains less effective than temperature scaling on LLMs as well. The label smoothing results are shown below; we will add these experiments to our revised paper.

| Model | conf-ECE In | conf-ECE Out | cw-ECE In | cw-ECE Out | Accuracy In | Accuracy Out |
|-|-|-|-|-|-|-|
| Llama3.1-8B-Tulu | 0.1898 | 0.1009 | 0.0692 | 0.0639 | 0.6372 | 0.7116 |
| Vicuna-7B | 0.1221 | 0.0823 | 0.0517 | 0.0544 | 0.4517 | 0.5767 |
| Olmo2-7B | 0.1010 | 0.0499 | 0.0791 | 0.1298 | 0.6808 | 0.6431 |
| Mistral-7B | 0.1874 | 0.1121 | 0.0900 | 0.0990 | 0.6479 | 0.6997 |

Regarding the contextual calibration method introduced in [1], we have carefully reviewed the paper. Although the term "calibration" appears in its name, the method is primarily designed to improve task performance across various NLP tasks, rather than to reduce ECE specifically. As such, their objective differs from ours, and the approaches are not directly comparable. [3] When Does Label Smoothing Help? Rafael Müller, Simon Kornblith, Geoffrey Hinton [4] On Calibration of Modern Neural Networks. Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger >Q4. A better comparison of the Win rate is between CFT and DPO-generated responses since response qualities may decrease but still outperform the one being compared in the current setting. A: Thank you for the suggestion. 
We agree that comparing the win rate between CFT and DPO-generated responses provides more information. We have conducted these additional experiments on AlpacaEval dataset, and the results are now included in our revised paper and provided below for reference. |Win Rate | CFT| DPO | RCFT| DPO| |-|-|-|-|-| | Llama-3.1-8B-Tulu | 51.68 | 48.32 | 46.83 | 53.16 | | Vicuna-7B | 46.46 | 53.54 | 50.43 | 49.57 | | Olmo2-7B | 62.48 | 37.52 | 46.12 | 53.88 | | Mistral-7B | 46.96 | 53.04 | 49.81 | 50.19 | Experiments on Alpaca-Eval and Arena-hard are provided in the end of our response to Reviewer USq4. >Q5. Some relevant methods are not discussed: [1,2]. A. We have included both [1] and [2] in the revised version of the paper. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. My concerns are mostly addressed. Quick questions for a table in Q4: Why are there two DPO columns with different numbers? --- Reply to Comment 1.1.1: Comment: Thank you very much for your response. To clarify briefly: the first column under DPO shows the win rate compared to CFT, while the second column shows the win rate compared to RCFT. We realized that the previous table layout may have been unclear, so we have slightly revised the structure to present the results more clearly. |Win Rate | CFT vs DPO | RCFT vs DPO | |-|-|-| | Llama-3.1-8B-Tulu | 51.68 vs 48.32 | 46.83 vs 53.16 | | Vicuna-7B | 46.46 vs 53.54 | 50.43 vs 49.57 | | Olmo2-7B | 62.48 vs 37.52 | 46.12 vs 53.88 | | Mistral-7B | 46.96 vs 53.04 | 49.81 vs 50.19 | Here is a more detailed explanation. Following your suggestion that “a better comparison of the win rate is between CFT- and DPO-generated responses,” we conducted the following experiments. Using the AlpacaEval dataset, we generated responses with our CFT/RCFT models and the DPO model. We then used GPT-4 as a judge to determine which response was better in each case. 
For example, in the first row (using the Llama3.1 model), when comparing CFT and DPO responses, GPT-4 judged 51.68% of CFT responses to be better and 48.32% of DPO responses to be better. These percentages sum to 100%. Similarly, in the comparison between RCFT and DPO, GPT-4 preferred 46.83% of RCFT responses and 53.16% of DPO responses. These results suggest that the generation quality of CFT and RCFT is approximately on par with DPO. Importantly, this also indicates that incorporating CFT does not degrade the alignment quality achieved by DPO.
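The tallying behind these percentages is simple; as a pure-Python sketch (the 'A'/'B' verdict labels are hypothetical stand-ins for the judge preferring the first or the second system in each pair):

```python
def win_rates(verdicts):
    """Pairwise win rates from judge verdicts: each verdict is 'A' or 'B',
    indicating which of the two compared responses the judge preferred.
    The two rates sum to 100 by construction."""
    n = len(verdicts)
    rate_a = 100.0 * sum(v == "A" for v in verdicts) / n
    return rate_a, 100.0 - rate_a
```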
ToMA: Token Merge with Attention for Diffusion Models
Accept (poster)
Summary: The paper investigates token reduction via submodular optimization. Claims And Evidence: Yes. Methods And Evaluation Criteria: The proposed methods are valid; however, having $75\%$ token reduction leads only to $\approx 1.4\times$ faster inference than the baseline. Please check the question section. Theoretical Claims: The paper relies on the known plain greedy algorithm for maximizing a submodular function. However, I find some information concerning the unmerging part lacking; see the questions section. Experimental Designs Or Analyses: The experimental designs and corresponding analyses are sound; however, as mentioned earlier, when reducing a large number of tokens, the model does not benefit much in terms of decreasing inference time; see the questions section. Supplementary Material: I went over the entire appendices. Relation To Broader Scientific Literature: The idea of token reduction via submodular optimization is quite innovative and fitting. Such methods could complement standard techniques in deep learning, becoming inseparable tools that allow faster performance with a theoretical backbone (one that would work on a vast range of models, rather than being tailored to a specific set of diffusion models). Essential References Not Discussed: The paper references the right papers that are essential to understand the contribution of the paper. Other Strengths And Weaknesses: The strengths of the paper are: 1) Formulating the token reduction as a submodular function maximization problem is fitting. 2) Utilizing the fact that submodular optimization aims to ensure a diverse subset of tokens for token merging and unmerging is quite fascinating; however, I do have some questions on this matter. 3) The architecture changes made to realize the full potential of the token reduction phase, merging, and unmerging are necessary, innovative, and efficient from a practical point of view. 
The weaknesses, however, are: 1) The submodular phase takes $O(N^2d)$ time (as mentioned in Section C in the appendix). This might explain why the method, while being faster than the competitors, is not much faster than the baseline model; see the questions section. 2) The paper needs polishing; see the comments section. Other Comments Or Suggestions: In what follows, some typos will be listed: 1) In the appendix, please replace line 1071 "Tab ??" with Table 4. 2) In the appendix, please replace line 1076 "Tab ??" with Table 5. 3) Remove line 452. 4) Line 370, second column, I think the authors meant to mention Table 1. 5) Table 2 was never referenced in the paper. Questions For Authors: 1) The authors claim to implement the naive greedy submodular maximization algorithm for the facility location problem, applicable to GPU. Since the algorithm itself is iterative, it cannot be parallelized. However, did the authors mean to parallelize the similarity matrix computation? In addition, did the authors parallelize the search for the item with the maximal marginal gain? 2) What are the values of $N$ and $d$ throughout the experiments? This aims to clarify why $75\%$ does not lead to a higher reduction in inference time. 3) The authors hint that due to diversity in the sampled set (a given byproduct of the submodular optimization process), the resulting $\tilde{A}$ is somewhat orthogonal. Is this claim true? If so, why would it be orthogonal? 4) A follow-up question on the previous: usually with submodular optimization, we would have a set that contains up to $D$ items. Is this also the case here? This would explain whether the set is forced to have exactly $D$ items to ensure the token reduction ratio -- in such a case, the items in the set may not be independent (from a linear point of view), which would then lead to numerical issues with the pseudo-inverse. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your review. We appreciate your feedback and pointing out several typos that need polishing. Due to ICML 2025 regulations, we are currently unable to modify the submitted PDF. However, we will certainly address and correct these issues in the updated version. Regarding your other concerns about efficiency and submodularity, please find our detailed responses below. ***Response***: ***Parallel of Submodular Optimization:*** Thank you for pointing this out. You're correct—the iterative nature of submodular optimization is inherently unavoidable, as each selection step depends explicitly on the previously selected items. However, to maximize parallelism, we indeed parallelize the computation by performing token selection independently across multiple local regions, each selecting tokens with maximal marginal gains. Since the submodular optimization in each local region is independent, this approach significantly reduces iteration time, as submodular optimization is now executed over smaller subsets with fewer tokens to select at each step. ***Values of N and d:*** In our formulation, *N* denotes the sequence length and *d* the channel dimension. In SDXL, as the UNet goes deeper, *N* decreases while *d* increases—for 1024×1024 image generation, typical configurations include [4096, 640] and [1024, 1280]. In FLUX, *N* is composed of both text (512) and image (4096) tokens, totaling 4608, with a fixed channel dimension of 3072. At 75%, ToMA achieves 1.4x speed-up instead of a higher reduction because of its own overhead. Though ToMA introduces significantly less overhead than other token merging methods—as demonstrated in Appendix Section F—its operations still produce a non-negligible runtime cost. Yet, this overhead can be further reduced through engineering efforts such as kernel fusion and custom CUDA optimization. 
It's also worth noting that the speed-up in FLUX is slightly lower than in SDXL, mainly due to the additional computation required for RoPE reconstruction. This involves gathering original RoPE positions and recomputing positional encodings based on the merged token set to maintain image quality. ***Orthogonal***: Thank you for your question. To clarify, ${\tilde{A}}$ refers to the **merge matrix** computed from attention scores, not the token feature matrix. Since ${\tilde{A}}$ is a non-negative matrix, strict orthogonality would mean that each source token attends to exactly one destination token, with no overlap. In the approximate case, the overlap across different rows is small—i.e., each source token mainly attends to a unique destination token. This naturally requires the destination tokens to be diverse; otherwise, a source token may attend to two non-diverse destination tokens and produce overlap. Therefore, **diversity is a necessary condition** for orthogonality or approximate orthogonality in this setting. Moreover, this connects closely to the **facility location objective** used in our submodular selection: it ensures that every source token has at least one highly similar (i.e., representative) token in the selected subset, which means that every source token mostly attends to its representing destination token. ***Submodular Optimization Related:*** Thank you for your question. Indeed, we select a subset of up to $D$ tokens. However, linear dependency issues are unlikely in our setting. Specifically, our pseudo-inverse operation is applied to the merge matrix ${\tilde{A}}$, which is obtained by multiplying the selected token subset with the entire token set. 
This merge matrix is non-negative and naturally has the largest values along its diagonal (or, more precisely, every row has a distinct index of the largest value), ensuring that the selected tokens remain linearly independent and thus preventing numerical instability during the pseudo-inverse computation. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed answers. I, however, have a follow-up question: * Concerning "Parallel of Submodular Optimization": Generating the submodular coreset in parallel on different localities (I assume different subsets of tokens) would lose some cross-similarities between these points. How do you justify this? Are the sets different from each other in terms of vector similarities? --- Reply to Comment 1.1.1: Comment: Thank you for raising this thoughtful question. We appreciate your engagement with our methodology. Below, we clarify the rationale and address your concerns about cross-similarities: ***Local Smoothness in Image Latents:*** In vision latents, tokens corresponding to nearby image patches show stronger similarities than those that are farther apart—a property confirmed by our empirical analysis (*Figures 3 & 9*). This leads to weaker cross-region similarities, allowing us to process local regions in parallel while still preserving most of the important relationships. ***Positional Embeddings:*** Positional embeddings (i.e., RoPE) further reinforce this locality prior by explicitly decaying similarity scores with increasing geometric distance between tokens. This means the cross-region similarities we ignore are already heavily attenuated by the model's architecture itself. Our parallelization approach simply aligns with this built-in inductive bias. ***Regularization Effect:*** It's worth noting that the locality constraint also provides beneficial regularization. 
By enforcing balanced token selection across different image regions (≤ $k$ tokens per local region), we prevent the coreset from over-concentrating on just the most globally salient features. This leads to more diverse and representative token selections in practice. ***Theoretical Justification:*** At its core, our approach approximates the full similarity matrix by focusing on local region similarities only, ignoring the aforementioned small cross-region terms. Formally, this is solved through submodular optimization under a partition matroid constraint, where at most $k$ tokens are selected from each region, and the regions form a partition of the ground set.
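The per-region greedy selection discussed in this thread can be sketched as follows (a minimal pure-Python illustration with an assumed cosine similarity; the actual implementation batches the similarity computation and runs the regions in parallel on GPU):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1e-12
    nb = math.sqrt(sum(x * x for x in b)) or 1e-12
    return dot / (na * nb)

def greedy_facility_location(tokens, k):
    """Greedily pick k tokens maximizing the facility-location objective
    sum_i max_{j in S} sim(i, j) over the selected set S."""
    n = len(tokens)
    sim = [[cosine(tokens[i], tokens[j]) for j in range(n)] for i in range(n)]
    best = [0.0] * n  # current coverage of each token by the selected set
    selected = []
    for _ in range(min(k, n)):
        def gain(j):
            # Marginal gain of adding candidate j to the selected set.
            return sum(max(best[i], sim[i][j]) - best[i] for i in range(n))
        j_star = max((j for j in range(n) if j not in selected), key=gain)
        selected.append(j_star)
        best = [max(best[i], sim[i][j_star]) for i in range(n)]
    return selected

def select_per_region(regions, k):
    """Partition-matroid version: at most k tokens from each region,
    with each region's subproblem solved independently."""
    return [greedy_facility_location(region, k) for region in regions]
```

Because each region's subproblem is independent, `select_per_region` is trivially parallelizable, which is the point made above about running the greedy selection over smaller local regions.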
Summary: The authors present a token merge algorithm for image diffusion models, based on the theory of submodular optimization, operations that are more friendly to GPU, and some extra tricks to further enhance the speedup. The proposed method first selects destination tokens that are most representative and then merges these tokens considering the attention and locality. The experimental results show speedup over the unoptimized baseline and several other token merging baselines. The absolute speedup on image generation is around 30% when reducing 75% of the tokens, which is a notable improvement over the baselines compared. ## update after rebuttal The rebuttal solved all my concerns; therefore, I raised the score to 4. Claims And Evidence: Most of the claims are supported. For details, see weaknesses. Methods And Evaluation Criteria: The method and benchmark make sense. Theoretical Claims: Yes, they are correct. Experimental Designs Or Analyses: Yes. Supplementary Material: Reviewed the submitted source code. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: The overall quality of the paper is good, with noticeable improvement over the baseline together with sufficient qualitative comparison. However, the paper can be polished by addressing the following weaknesses: 1. Table 2 is apparently included in a rush, with significantly less content compared to Table 1. Thorough evaluation and comparison on DiT-based diffusion models is crucial for readers to see the merit of this paper. 2. ToMA stripe usually performs worse in terms of the metrics despite the improvements in runtime, and ToMA tile provides comparable quality but at the cost of overhead, which leaves the reader wondering what the real benefit of using these two variants is. And there is ToMA*, which leads to significant degradation in quality. 3. In Table 3, the proposed method is only compared with ToDo at 75% of token reduction. 4. 
The authors are suggested to include a runtime breakdown and comparison to properly prove that the designed method is GPU-friendly and superior to prior works. Other Comments Or Suggestions: N/A Questions For Authors: Please refer to weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: *Dear Reviewer*, Thank you for taking the time to review our work. We have addressed your concerns in detail in the responses below. ***Response***: ***Lack of Comparison on DiT-Based Diffusion Model:*** Thank you for raising this point. Table 2 includes fewer comparisons with other methods in the DiT-based model setting because existing approaches often produce mosaics or noise—they are not compatible with both RoPE positional embeddings and the specialized DiT architecture (Joint and Single Transformers). As a result, we focus on Table 2 for analyzing our ToMA variants. To address your concern, we include comparisons with the ToMA_tile implementation. We omit the ToMA_stripe, as stripe is incompatible with the 2D RoPE. Additionally, we report results on the RTX8000, excluding V100 due to its insufficient memory for the FLUX model. | Ratio | Method | FID↓ | CLIP↑ | DINO↓ | Sec/img (RTX 8000)↓ | Sec/img (RTX 6000)↓ | | --- | --- | --- | --- | --- | --- | --- | | 0.00 | Baseline | 31.56 | 29.03 | **0** | 59.20 (0%) | 21.03 (0%) | | 0.25 | ToMA | **30.80** | **29.07** | 0.043 | **56.70 (-4.2%)** | **20.14 (-4.2%)** | | | ToMA_tile | 31.49 | 29.05 | **0.021** | 57.47 (-2.9%) | 20.78 (-1.2%) | | 0.50 | ToMA | **31.70** | 29.09 | 0.051 | **51.44 (-13.1%)** | **18.58 (-11.6%)** | | | ToMA_tile | 32.95 | **29.19** | **0.032** | 53.61 (-9.4%) | 19.61 (-6.8%) | | 0.75 | ToMA | **33.39** | 28.98 | 0.064 | **49.83 (-15.9%)** | **16.12 (-23.4%)** | | | ToMA_tile | 33.88 | **29.34** | **0.045** | 49.86 (-15.8%) | 18.30 (-12.9%) | ***ToMA Stripe, ToMA Tile, and ToMA*:*** Thank you for pointing this out. We ultimately selected the tile method as it aligns well with the inherent hidden states locality, and in FLUX scenarios, it complements the 2D RoPE structure. We believe ToMA_tile can be further optimized—e.g., via contiguous tile-shaped memory access and kernel fusion for (un)merging operation with the attention computation. 
We believe that, through such optimization, ToMA_tile could achieve acceleration comparable to ToMA_stripe. However, such optimization poses a significant engineering challenge, and to keep a fair comparison with baselines, we defer this to future work. Thus, we introduced ToMA_stripe as a practical fallback due to its lower overhead and immediate applicability, despite slightly weaker performance on quality metrics. As for ToMA*, it is an exploratory variant aimed at reducing overhead by applying (un)merging once at the model level rather than per transformer block. While this significantly improves runtime, it currently poses challenges in preserving generation quality, suggesting a promising direction for future architectural or training improvements. ***ToDo:*** The reason is that the implementation of ToDo only supports merging exactly 4 tokens into 1, inherently limiting the merge ratio to 75%. Therefore, we report results for ToDo only at the 75% merge ratio. ***Runtime Breakdown:*** Before presenting our results, we would like to clarify that the reported runtime does not fully reflect the actual time consumed, as we explored several timing measurement methods, each with its limitations. Torch Profiler introduces overhead, while using cuda.Event to measure individual components adds extra synchronization, potentially disrupting concurrency. Both inflate the measured runtime. To address this, we adopted a method where we selectively disabled specific components by commenting them out and measured the time difference. This approach yields results that are more consistent with theoretical expectations and offers a more reasonable estimation of the runtime impact. We have included a comparison of merge and unmerge operation runtimes for both ToMeSD and ToMA in Section F of the Appendix. Additionally, the table below provides a detailed runtime breakdown. As shown, ToMA introduces significantly less operational overhead compared to ToMeSD.
However, when the computation time is significantly reduced, the relative impact of overhead becomes more pronounced, resulting in a speedup that is smaller than the theoretical gain suggested by FLOPs. The runtime breakdown for FLUX closely aligns with that of SDXL. | Time Category | Subcomponent | SDXL + ToMA (s, %) | SDXL + ToMe (s, %) | | --- | --- | --- | --- | | **Computation Time** | | **0.550 (45.32%)** | **0.550 (24.54%)** | | | SelfAttention | 0.151 (12.45%) | 0.151 (6.74%) | | | CrossAttention | 0.107 (8.80%) | 0.107 (4.77%) | | | FeedForward | 0.292 (24.07%) | 0.292 (13.03%) | | **Token Merge Operation** | | **0.191 (15.73%)** | **1.168 (52.11%)** | | | ComputeMerge | 0.015 (1.21%) | 0.419 (18.67%) | | | Merge | 0.090 (7.44%) | 0.431 (19.25%) | | | Unmerge | 0.086 (7.10%) | 0.318 (14.19%) | | **Other Overhead** | | **0.472 (38.91%)** | **0.523 (23.35%)** | | **Total Block Time** | | **1.214 (100%)** | **2.241 (100%)** |
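The ablation-style measurement strategy described in this rebuttal (disable a component, re-time, and take the difference) can be sketched in a self-contained way. This is an illustrative CPU-side analogue using `time.perf_counter`, with `time.sleep` calls as hypothetical workloads—it is not the actual ToMA pipeline code:

```python
import time

def run_pipeline(include_merge: bool) -> None:
    """Stand-in for one denoising step; the sleeps are hypothetical workloads."""
    time.sleep(0.02)       # attention + feed-forward (always runs)
    if include_merge:
        time.sleep(0.01)   # the (un)merge component under measurement

def component_cost(repeats: int = 5) -> float:
    """Estimate a component's cost as the runtime difference between runs
    with the component enabled and with it 'commented out'."""
    def timed(flag: bool) -> float:
        start = time.perf_counter()
        for _ in range(repeats):
            run_pipeline(include_merge=flag)
        return (time.perf_counter() - start) / repeats
    return timed(True) - timed(False)
```

Unlike wrapping each operation in per-call timers, this difference-based approach adds no synchronization or instrumentation inside the measured region, which is the property the rebuttal relies on.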
Summary: This paper introduces ToMA (Token Merge with Attention), a GPU-optimized token merging method for transformer-based diffusion models, addressing inefficiencies in existing token merging techniques such as ToMeSD and ToFu. ToMA achieves its efficiency improvements through submodular optimization for token selection and attention-based linear projections for merging and unmerging. Experiments demonstrate that ToMA achieves a 24% speedup on SDXL and 23% on Flux.1-dev while maintaining image quality, outperforming previous work such as ToMeSD and ToFu. Claims And Evidence: Yes, most of the claims are supported by experiments—e.g., that ToMA achieves speedups by leveraging GPU-friendly operations and that submodular token selection works reasonably well. Methods And Evaluation Criteria: Yes. The paper evaluates the proposed method on both SDXL and Flux against ToFu and ToMeSD. Theoretical Claims: N/A. Experimental Designs Or Analyses: Yes. Overall the experimental designs are sound. However, the paper lacks a detailed computational complexity breakdown, such as memory usage, FLOP reductions, or additional latency savings beyond inference time alone. Supplementary Material: Yes. I reviewed the qualitative results of the paper. Relation To Broader Scientific Literature: The paper follows the idea of previous token reduction methods such as ToMeSD and ToFu but optimizes for GPU execution. It also extends submodular optimization to token selection for generative models. Essential References Not Discussed: The paper misses a discussion of sparse transformers [1], which also leverage locality for token reduction. [1] Child et al. Generating Long Sequences with Sparse Transformers. Arxiv 2019. Other Strengths And Weaknesses: Strengths: - ToMA is shown to work efficiently with modern GPU optimizations like FlashAttention2, avoiding memory bottlenecks. - Empirical results confirm the speed improvements of ToMA over prior methods on SDXL and Flux.
Weaknesses: - At a 0.25 merge ratio, the speedup is minimal, raising concerns about whether ToMA is beneficial in scenarios requiring high image quality. - The paper reports inference speed but lacks FLOP breakdowns and memory usage analysis, so it remains unclear where the efficiency gains come from. Other Comments Or Suggestions: N/A. Questions For Authors: - Would ToMA benefit from sparse attention techniques like Longformer? - What is the computational overhead of ToMA at extreme merge ratios (>90%)? Does merging too aggressively introduce additional inefficiencies or degrade quality significantly? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: *Dear Reviewer*, We appreciate your insightful comments and the time you've taken to evaluate our work. Below are our responses to your concerns: ***Concerns about High Image Quality and Little Speedup:*** At a ratio of 0.25, the acceleration is limited primarily because the reduction in sequence length is not substantial, which naturally results in limited speedup. Additionally, some unavoidable overhead remains, including operations within the transformer block that are not accelerated, as well as the overhead introduced by our own method (though relatively small, it still contributes). That said, further speedup could be achieved through engineering optimizations, such as low-level implementations. For instance, fusing the merge kernel directly into the attention computation could significantly improve efficiency. However, in order to maintain a fair comparison with the baseline and other existing methods (which do not achieve the same level of speedup as ToMA), we defer such engineering enhancements to future work. ***FLOP, Memory, and Efficiency:*** The efficiency primarily stems from the reduced sequence length of the attention inputs achieved by merging, which significantly lowers the computational load of the attention mechanism. **FLOP**: Our FLOP analysis focuses on the transformer blocks, specifically the projection and attention computations. In both FLUX and SDXL, ToMA achieves a substantial FLOP reduction, with a maximum improvement of approximately 3.4×. The overhead introduced by ToMA operations—including submodular token selection, merge token computation, and (un)merging—is negligible compared to the overall FLOP savings.
### FLOP Comparison Table

| Model | Example Layer (Seq Len × Dim) | Original FLOPs (GFLOPs) | ToMA (0.5) FLOPs (GFLOPs) | ToMA Operation FLOPs (GFLOPs) | Reduction |
| --- | --- | --- | --- | --- | --- |
| **FLUX** | 4608 × 3072 | 520 | 225 | 1.01 | ~2.3× |
| **SDXL** | 4096 × 640 | 106 | 31.54 | 0.42 | ~3.4× |
| **SDXL** | 1024 × 1280 | 30 | 12.76 | 0.06 | ~2.4× |

---

**Memory**: Regarding memory usage, please check the tables below showing the maximum memory allocated and reserved during the inference process. Experiments in both FLUX and SDXL settings indicate minimal impact on memory allocated or reserved, suggesting our method incurs only minor additional memory allocation.

### Combined Memory Usage Table (FLUX vs ToMA vs ToMA_tile)

| Metric | FLUX | ToMA 0.25 | ToMA 0.5 | ToMA 0.75 | ToMA_tile 0.25 | ToMA_tile 0.5 | ToMA_tile 0.75 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **Max Allocated (MB)** | 34640.07 | 34743.86 | 34709.85 | 34675.21 | 34647.18 | 34646.66 | 34642.15 |
| **Max Reserved (MB)** | 37002.00 | 37050.00 | 36976.00 | 36954.00 | 37054.00 | 37006.00 | 36950.00 |

### Combined Memory Usage Table (SDXL vs ToMA vs ToMA_stripe vs ToMA_tile)

| Metric | SDXL | ToMA 0.25 | ToMA 0.5 | ToMA 0.75 | ToMA_stripe 0.25 | ToMA_stripe 0.5 | ToMA_stripe 0.75 | ToMA_tile 0.25 | ToMA_tile 0.5 | ToMA_tile 0.75 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Max Allocated (MB)** | 10720.92 | 10930.77 | 10856.75 | 10796.86 | 10722.08 | 10718.62 | 10718.23 | 10725.21 | 10720.06 | 10718.67 |
| **Max Reserved (MB)** | 14150.00 | 14460.00 | 14260.00 | 14130.00 | 14114.00 | 14188.00 | 14222.00 | 14158.00 | 14158.00 | 14182.00 |

The efficiency comes from the shortened attention sequence length enabled by token merging, allowing ToMA to significantly reduce FLOPs while introducing negligible additional memory overhead compared to the baseline. ***Sparse Attention:*** The sparse attention mechanism naturally complements token merging.
The underlying strategies of sparse attention and token merging are fundamentally orthogonal: token merging explicitly reduces the input length of the attention computation, whereas sparse attention selectively attends to subsets of tokens. As these methods operate independently, they can easily be combined—sparse attention can directly replace flash attention within ToMA to further improve efficiency. Moreover, many sparse attention methods such as Longformer employ a sliding window that aligns with the local structure exploited by ToMA; the window can be used directly to restrict both merging and attention to local regions, further enhancing computational efficiency. ***Overhead and Image Quality at Extreme Merge Ratios:*** In general, we can achieve speedup by merging more tokens. Specifically, with a merge ratio above 90%, ToMA achieves a generation time of approximately 4.1 seconds on SDXL and 15.05 seconds on FLUX. However, at such high ratios, we observe a noticeable degradation in image quality, as the number of tokens processed by the transformer becomes very limited, directly impacting the output. That said, even at these extreme ratios, we did not observe any inefficiencies significant enough to offset the overall speedup in generation time.
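As a rough sanity check on where the FLOP savings come from, the dominant terms of a self-attention block can be estimated with a back-of-the-envelope counter. The terms counted below (QKV/output projections plus attention score and value products, multiply-adds counted as 2 FLOPs) are our own simplifying assumptions, so the exact GFLOPs will differ from the authors' accounting:

```python
def attention_block_flops(n: int, d: int) -> int:
    """Rough FLOP count for one self-attention block with
    sequence length n and model width d."""
    proj = 4 * 2 * n * d * d  # Q, K, V, and output projections
    attn = 2 * 2 * n * n * d  # QK^T scores and attention-weighted values
    return proj + attn

def merge_speedup(n: int, d: int, ratio: float) -> float:
    """FLOP reduction factor when a fraction `ratio` of tokens is merged
    before attention (and unmerged afterwards)."""
    kept = int(n * (1 - ratio))
    return attention_block_flops(n, d) / attention_block_flops(kept, d)
```

At a 0.5 merge ratio, the factor lands between 2× (projection-dominated, linear in sequence length) and 4× (attention-dominated, quadratic), consistent with the ~2.3–3.4× reductions reported in the FLOP comparison table.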
Enhancing Diversity In Parallel Agents: A Maximum State Entropy Exploration Story
Accept (poster)
Summary: This paper focuses on generating diverse experience for policy gradient algorithms in reward-free settings through the use of entropy maximisation and separate parallel policies. The method proposed is Policy Gradient for Parallel States Entropy Maximization (PGPSE). The empirical results on two grid-based environments show that more diverse policies lead to a higher entropy of collected states which corroborates the theoretical findings. Claims And Evidence: While there is theoretical evidence for this method, the empirical evidence is severely lacking for multiple reasons: 1. In section 5 the authors state: “As stated before, a core motivation of this work is addressing the problem of exploration in practical scenarios, where a strong parallelization of the environment is used to overcome the hardness to sample from complex simulators or even physical instantiations of the agents.” If a core motivation of this work is practical applications, then why are the environments so simple? They only have 43 states, which is significantly simpler than the common benchmark environments used in similar work [mujoco,atari,gym]. 2. Why was training only shown for 2, 4 and 6 agents? Especially since a higher K’ leads to higher state entropy and larger support size in Figure 1 (for stochastic environments). It would be interesting to see the trend of the gap between single agent and multi-agent as the number of agents grows. 3. I would like to see a small discussion around the higher variance introduced with more agents. Especially in the stochastic environments, as it seems more agents may not be statistically significantly better than a single agent in those environments given the high variance. Methods And Evaluation Criteria: Not comparing to other baselines makes it hard to place this work within the literature. For example, Hazan, Elad, et al [maxent] compared to a random policy, it would have been interesting to compare to MaxEnt and a random policy. 
Another simple baseline that seems to have been left out is N distinct agents learning without the entropy objective. Additionally, while multiple seeds were used for this work, I'd suggest following the advice of Agarwal et al. [rliable] and conducting a more rigorous evaluation. Theoretical Claims: I did not check the correctness of the proofs. Experimental Designs Or Analyses: Yes, the experimental design is valid, except for the fact that the environments are simple. Supplementary Material: Yes, section B and onwards. Relation To Broader Scientific Literature: It is related to a wealth of exploration literature and reward-free reinforcement learning, specifically work focusing on maximum entropy learning. It is also related to policy gradient methods in general, as in most cases rollouts are done in parallel, and thus this method could be applied. Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: **Strengths:** I think the application to offline RL is very interesting and should be further emphasised throughout the paper and tested on harder problems. **Weaknesses:** I feel the significance of this work is lacking. The authors mention a large amount of potential future research, and I believe that this would need to be included for this paper to be considered for ICML. As it stands, the environments tested are very simple and the paper lacks baselines, which makes it impossible to place in the literature. Additionally, the clarity of the writing is poor and is a significant weakness of this work. Other Comments Or Suggestions: The paper is poorly written, containing numerous grammatical and formatting errors. I cannot enumerate them all here, but here are a few examples: 1. Figures 2 and 3 lack labels for what the colours are referring to. 2. Page 1 second last paragraph: performances -> performance 3. Page 4 section 4: “Main Takeout” -> “Main Takeaway” 4. Page 4 section 5: “hardness” -> “difficulty” 5.
Page 4 section 5: “our attention now shift” -> “our attention now shifts” 6. Page 7 section 6 (Offline RL): “Can dataset collected with parallel maximum entropy agents benefit offline RL algorithms on those data?” -> “Can datasets collected with parallel maximum entropy agents benefit offline RL algorithms” ----- **References** [mujoco] Todorov, Emanuel, Tom Erez, and Yuval Tassa. "Mujoco: A physics engine for model-based control." 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2012. [atari] Bellemare, Marc G., et al. "The arcade learning environment: An evaluation platform for general agents." Journal of Artificial Intelligence Research 47 (2013): 253-279. [gym] Brockman, Greg, et al. "OpenAI Gym." arXiv preprint arXiv:1606.01540 (2016). [maxent] Hazan, Elad, et al. "Provably efficient maximum entropy exploration." International Conference on Machine Learning. PMLR, 2019. [rliable] Agarwal, Rishabh, et al. "Deep reinforcement learning at the edge of the statistical precipice." Advances in Neural Information Processing Systems 34 (2021): 29304-29320. Questions For Authors: Is there a reason why the gap in state entropy seems to decrease as you increase the number of agents? In Figure 1, it seems that the gap between 6 agents and K' = 6 is smaller than between 2 agents and K' = 2. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's time and effort in evaluating our work, and we are grateful for the opportunity to provide further clarifications. We also thank the reviewer for highlighting grammatical and formatting issues, which we will address in the final version. Additionally, we are pleased that our contribution on applying the exploration approach to offline RL was appreciated. **Weaknesses: I feel the significance of this work is lacking.** We hear the reviewer's concerns about the simple domains and baselines. We will address them in detail below. First, we want to spend a few words to get on the same page with the reviewer on the nature of this work. The core contributions of this paper are *conceptual* and *theoretical*: The formulation of a reward-free exploration objective tailored to the parallel MDP setting (Section 3), its theoretical characterization through novel concentration bounds (Section 4 and Theorem 4.1), the design of a specialized policy gradient procedure (Section 5). While we corroborate the latter contributions with a *preliminary* empirical analysis, this is not meant to close the gap between foundations and application in the real world, which we believe requires a substantial amount of additional work for not just a follow-up but a series of papers. We kindly ask the reviewer to also weigh our main contributions in their evaluation and to reward the paper for its seminal potential. **If a core motivation of this work is practical applications, then why are the environments so simple?** Please note that we do not claim the paper provides a solution for real-world applications, but that this research line is *motivated* by real-world applications. We do not aim to close the gap with applications with our work, which, being just the first in this direction, aims to introduce the problem formally and to provide the theoretical foundations for future works. 
**Why was training only shown for 2, 4, and 6 agents?** Given the gridworld's complexity, a few agents are sufficient for successful navigation, and adding more does not significantly improve performance due to state space constraints. In experiments on more complex scenarios, we expect a much larger number of learners to be beneficial for exploration. For completeness, we will extend Figure 7 in the Appendix to show performance with more agents, highlighting how increasing their number maximizes the objective until a plateau is reached. **I would like to see a small discussion around the higher variance introduced with more agents.** Variance in performance is an important aspect, which we will discuss further in the paper. While a single agent with more trajectories can achieve similar expected performance, parallel learners naturally exhibit smaller variance around the mean. This is supported by Theorem 4.1, and the experiments in Figures 2 and 3 highlight the benefits of parallel training. A single agent tends to explore the entire state space, leading to higher variance in state distribution and greater sample complexity compared to the parallel case, in which the policies are less stochastic. **Is there a reason why the gap in state entropy decreases as the number of agents increases?** In designing the experiments, we aimed for a fair comparison. Comparing parallel agents, each playing one trajectory at a time, against a single agent with the same interaction budget would overestimate the parallel learners' advantage. Instead, we chose a more challenging setup where the single agent has a significantly larger interaction budget than each individual parallel agent. This setup allows the single agent to develop a stronger policy, reducing the gap in state entropy.
However, when considering the offline RL setting, the advantage of parallel learning becomes more evident due to the higher variance induced by the policy of the single agent, which needs to visit more states, with respect to a specialized parallel policy that concentrates faster. **The lack of baselines makes it hard to place this work within the literature.** As our paper is the first to address this specific problem of state entropy maximization in parallel MDPs, we do not have a direct baseline with which to compare. However, we did compare against a *random policy* (gray bar in Figures 3 and 4) and a *single policy*, which can be seen as replicas of a MaxEnt algorithm across the parallel MDPs (the objective is the same as MaxEnt, although we used our implementation instead of the one from the original paper). Those are the baselines that the reviewer also mentioned, and we believe they are the most relevant. We will make this information clearer in the paper. --- Rebuttal Comment 1.1: Comment: I thank the authors for taking the time to answer my questions. They have cleared up my misunderstanding around practical applications, and I see that my recommended baselines, although unlabeled, were already present in the paper. I also appreciate the clarity around the higher variance with more agents. >few agents are sufficient for successful navigation Please mention this in the paper. The reason I asked for this is not only to see how well PGPSE scales, but rather to see how well a single agent approach could do given more transitions, as it seems that this performance scales well with $K$. >we did compare against a random policy (gray bar in Figure 3 and 4) Thank you for pointing this out, but given that figure 3 is unlabeled, how is a reader supposed to tell that the gray bar is a random agent? Additionally, it would be interesting to see the random agent curves in Figure 2. >single policy is equivalent to replicas of MaxEnt Could you please make this more clear in the experiment section?
>The core contributions of this paper are conceptual and theoretical While I acknowledge that the theoretical proofs are a core contribution of this paper, it does not absolve the authors from performing adequate experiments to validate their proposed method, which is also listed as a core contribution in the introduction. I do not feel that what I am asking is unreasonable as I see other works in this field which are able to have both extensive proofs and experiments on complex environments [1,2]. In fact, some of these works even label experiments using robots simulated in MuJoCo (a much more complex environment than the ones present in this paper) as “preliminary” [1]. Some of these similar works also use a maze environment as done in this paper, but explicitly label them as “simple” environments or use them to easily display learned behaviours, but not as the main benchmark [2,3]. To increase my score, ideally I want to see experiments on more complex environments aligned with the literature, however I realise this may be infeasible in such a small amount of time. Thus, I may be willing to increase my score if the significance of the current experiments are significantly downplayed, for example they should be presented as "preliminary" in the introduction, experiments and conclusion sections. It should also be clearly acknowledged when mentioning environment details that these are simple environments and are used as a proof of concept. [1] Hazan, Elad, et al. "Provably efficient maximum entropy exploration." International Conference on Machine Learning. PMLR, 2019. [2] Eysenbach, Benjamin, and Sergey Levine. "Maximum entropy RL (provably) solves some robust RL problems." arXiv preprint arXiv:2103.06257 (2021). [3] Eysenbach, Benjamin, et al. "Diversity is all you need: Learning skills without a reward function." arXiv preprint arXiv:1802.06070 (2018). 
--- Reply to Comment 1.1.1: Comment: Dear Reviewer, We are happy to see that our replies helped clarify important aspects of the paper. We thank the reviewer for raising these points in the first place and for giving us important feedback on what is not clear in the current writing. **We will incorporate all of these clarifications in an updated and improved version of the manuscript**. Regarding the **experiments**, it was not our intention to overstate their scope. This is why we named the experimental section "Empirical Corroboration", where "we report numerical experiments in simple yet illustrative domains" (lines 247-249). We acknowledge that the introduction is not as clear on the nature of the experiments. **Following the reviewer's suggestion**, we will change "We provide numerical experiments to corroborate our findings[...]" into "We provide numerical experiments in illustrative domains for a preliminary validation of our findings[...]" at line 107 of the introduction. We agree that a thorough empirical evaluation in **more challenging domains** would make for an even stronger submission. However, we note that **it is not standard in previous publications in the same area**. Works that are mainly conceptual and theoretical, e.g., Mutti et al. 2020a, Guo et al. 2021, Mutti et al. 2022a, Tiapkin et al. 2023, Zamboni et al. 2024ab, only report experiments in gridworld or chain MDPs, which are comparable with the domains we considered. Hazan et al. 2019 is actually an exception, although their Mujoco experiments have been shown to lead to extremely sub-optimal results (see Mutti et al. 2021, Liu and Abbeel 2021 for a comparison with “practical” algorithms). Regarding the other mentioned papers: [2] tackles a different problem than ours (entropy of the policy, not of the state visitation), whereas [3] is somewhat related (see our rebuttal to R. Fend above) but mostly empirical.
Finally, we hope the additional clarifications have convinced the reviewer on the value of our work and to increase their score accordingly. [Mutti et al. An intrinsically-motivated approach for learning highly exploring and fast mixing policies, 2020] [Guo et al. Geometric entropic exploration, 2021] [Mutti et al. The importance of non-Markovianity in maximum state entropy exploration, 2022] [Tiapkin et al. Fast rates for maximum entropy exploration, 2023] [Zamboni et al. How to explore with belief: State entropy maximization in pomdps, 2024a] [Zamboni et al., The limits of pure exploration in pomdps: When the observation entropy is enough, 2024b] [Mutti et al., Task-agnostic exploration via policy gradient of a non-parametric state entropy estimate, 2021] [Liu and Abbeel, Behavior from the void: Unsupervised active pre-training, 2021]
Summary: This paper studies how parallel training facilitates exploration in reinforcement learning. The major result is that parallel exploration can not only obtain batched acceleration compared to single-agent exploration, but it is also possible to further improve sample complexity through diversity-driven policy design. The results come from a novel and careful analysis of the parallel exploration procedure and are backed up through sufficient empirical experiments. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: No Relation To Broader Scientific Literature: This result seems to be of broader interest in RL as more and more vectorized/parallel environment simulation is available in recent literature. Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: ### Strengths This paper proposed a novel problem: if we can run n agents in parallel, how should we maximize exploration (i.e., state occupancy entropy)? It turns out the answer is not as trivial as running the same "uniform" policy for n agents simultaneously. A better approach is to run $n$ different policies that each maximize exploration for a sub-region so that, in aggregation, we achieve a uniform distribution overall. The reason is that covering a sub-region gives a better sample complexity in terms of estimating state transitions and probabilities. After seeing the explanation it feels obvious, but the mathematical derivations are carried out satisfyingly, in particular the decomposition insights. What is more surprising is the policy gradient method proposed by the authors. It turns out that the gradient of the entropy objective for each single agent can be derived in a distributed manner (with the aggregated empirical distribution). Therefore a practical algorithm can be carried out that maximizes exploration in a non-trivial way. The paper is well-written.
The model is clear and the analysis is intuitive, and the experiments are conducted satisfyingly. ### Weakness One question: if the motivation is speed, how much does the gradient descent step slow down the computation? This is not discussed in the paper. Other Comments Or Suggestions: 1. For clarification, in line 204, the authors claim: As a result, they induce distributions with ‘lower’ entropy compared to a single policy covering the entire space. Can the authors elaborate on how to get this conclusion from Theorem 4.1? 2. Line 252, typo: double "the" Questions For Authors: If the motivation is speed, how does the gradient descent step slow down the computation? Is it possible to synchronize the gradients once in a while so that parallelization can be maximized? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's time and effort in evaluating our work and are grateful for the opportunity to provide further clarifications. We also thank the reviewer for pointing out the typo errors; we will address them in the final version. **If the motivation is speed, how does the gradient descent step slow down the computation?** We are not fully sure we understand the reviewer's question here. We provide a tentative reply below; if the reviewer feels we are not addressing their point, we will be more than happy to provide further considerations. The discussion on the cost of the gradient calculation is an excellent point. We have not analyzed the computational cost of calculating the gradient at each step, which is negligible in our experiments. However, in more complex domains, a vectorized calculation of the gradient (which may be what the reviewer is suggesting) will definitely be beneficial from the computational cost perspective. We will add this important consideration to the manuscript. **As a result, they induce distributions with ‘lower’ entropy compared to a single policy covering the entire space. Can the authors elaborate on how to get this conclusion from Theorem 4.1?** We acknowledge that we may not have fully clarified the use of the findings from Theorem 4.1, so we will provide extended intuition in a revised version of the manuscript. We designed Equation 1 to intrinsically motivate each agent to reduce the portion of the state space it explores, encouraging diversity. This leads to an induced empirical state distribution with lower entropy, due to the small support of the states visited by each single agent. As outlined in Theorem 4.1, a lower-entropy state distribution means that fewer samples are needed to realize the target distribution approximately.
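The intuition in this rebuttal can be illustrated numerically: policies that are individually low-entropy but cover complementary regions still yield an aggregate distribution of maximal entropy. A minimal sketch on a toy state space (our own illustration, not the paper's code):

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

S = 8  # toy state space size
# Two specialized agents, each covering half the states uniformly.
d1 = [2 / S] * (S // 2) + [0.0] * (S // 2)
d2 = [0.0] * (S // 2) + [2 / S] * (S // 2)
mixture = [(a + b) / 2 for a, b in zip(d1, d2)]

# Each agent's distribution has entropy log(S/2), below the maximum,
# yet the aggregated mixture attains the maximum log(S).
assert math.isclose(entropy(d1), math.log(S // 2))
assert math.isclose(entropy(mixture), math.log(S))
```

Theorem 4.1 then suggests that each agent's empirical visitation concentrates to its low-entropy target with fewer samples than a single agent would need for the full-support uniform distribution.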
Summary: This paper proposes an exploration framework for parallel agents with state entropy maximization and an analysis of the framework. They showed on tabular environments that parallel exploration covers the state space better than single-agent exploration with the same compute budget. And datasets collected by parallel agents also help post offline training. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: The derivation In lines 240-247 needs further explanation. I did not check Theorem A.2. Experimental Designs Or Analyses: I reviewed the experiments and they appear sound and well-reasoned. Supplementary Material: I read the whole appendix except Theorem A.2. Relation To Broader Scientific Literature: This paper provides proof-of-concept analysis for parallel exploration, which is relatively new and important to the key exploration problem for RL. Essential References Not Discussed: Entropy-based diversity is not something new for multi-agent RL. See [1] Lupu, Andrei, et al. "Trajectory diversity for zero-shot coordination." [2] Zhao, Rui, et al. "Maximum entropy population-based training for zero-shot human-ai coordination." Other Strengths And Weaknesses: The paper is well-written. The experiments provide strong support for the hypothesis that parallel exploration is more efficient. While this work focuses on tabular environments, it would be interesting to see if the algorithm and analysis can be extended to a larger state space possibly with function approximation. The hypothesis that parallel exploration is more efficient because each agent can focus on a smaller region is interesting and intuitive. This is also insightful for future works on exploration. I believe it is true, but I would like to see more support for the hypothesis. Other Comments Or Suggestions: No. Questions For Authors: The ``Main Takeout’’ section argues that each agent can focus on different regions. 
1. Why are the agents incentivized to be different rather than all becoming the same policy with uniform coverage over states? Is it because if the entropy of each agent's distribution is low, then the empirical estimation error will be lower? Is the efficiency of parallel exploration over single-agent exploration shown via sample efficiency or convergence rate? 2. In parallel exploration, what does the entropy of each individual dataset look like, and how different (e.g., in KL divergence) are the datasets between different agents? 3. In lines 240-247, why can the log trick be applied when $d_p$ is the mixture distribution, which does not depend only on $\pi_i$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's time and effort in evaluating our work, and we are grateful for the opportunity to provide further clarifications. **Why are the agents incentivized to be different rather than all become the same policy with uniform coverage over states? Is it because if the entropy of each agent’s distribution is low, then the empirical estimation error will be lower? Is the efficiency of parallel exploration better than single-agent exploration shown by sample efficiency or convergence rate?** Precisely, the reviewer is right: if each agent's policy has low entropy, then the empirical realization concentrates faster to the target distribution. This finding is supported by Theorem 4.1: since the entropy term appears in the numerator of the lower bound on the number of samples, fewer samples (smaller *n*) are needed to concentrate around the target distribution. Even if a policy with uniform expected coverage over the states exists (typically it does not), a single realization from this policy may not have uniform coverage, so appropriately specialized policies can still be preferable. Consider a two-room gridworld with two specialized parallel agents, each assigned to a specific room. In a single realization, the state distribution will be uniform because each agent deterministically explores its designated room. In contrast, a single agent following a maximum-entropy policy has higher variance in its state distribution: it is not guaranteed to visit both rooms in a single realization, as its exploration is probabilistic rather than deterministic. **In parallel exploration, how does the entropy of each individual dataset look like and how different (e.g., KL divergence) are the datasets between different agents?** Thanks to the insightful question posed by the reviewer, we can further clarify that under this objective formulation, each learner is intrinsically encouraged to minimize entropy along its own trajectory.
This naturally leads to the emergence of specialized policies across different regions of the state space, as illustrated in Figure 10. Indeed, in the same figure, where the action distributions are plotted, each agent in the parallel case tends to follow a more deterministic policy, naturally dividing the state space among the *m* agents. In contrast, a single agent aiming to maximize the entropy of its state distribution spreads its actions more uniformly. **In lines 240-247, why can the log trick be applied when $d_p$ is the mixture distribution, which does not only depend on $π_i$?** Since agents have independent policies, the gradient of each agent’s policy with respect to its parameters depends only on the states it has visited itself. Specifically, changing the policy parameter $θ_i$ of the *i*-th agent does not influence the trajectories generated by the others. **Missing References** Regarding the references, we thank the reviewer for pointing out the interesting papers about coordination, which we will mention in an updated version of the manuscript. Below we explain how they differ from our work: - **Trajectory Diversity for Zero-Shot Coordination:** We note that this paper addresses the problem of zero-shot coordination in a setting related to MARL. In MARL, multiple agents act in the *same* environment, where state transitions depend on the joint action $a_t = (a_{1t}, ..., a_{kt})$. In contrast, in our setting, multiple agents interact with *independent* copies of the environment, so the experienced trajectories are also independent. Moreover, even if the *Diversity* objective considered in their Section 4.2 and ours are similar in nature, their objective is not intended to maximize entropy, only diversity among the policies. We appreciate the reviewer’s suggestion and will cite the paper in the final version to clarify our position.
- **Maximum Entropy Population-Based Training for Zero-Shot Human-AI Coordination:** As in the previous example, the main difference with this work relates to the construction of the environment. The definition of the *Two-Player MDP* positions this paper in the MARL setting, where the agents generate dependent trajectories. For completeness of positioning, we will include it in our reference list.
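The independence argument in the log-trick answer above can be written out explicitly. The following is our reconstruction in generic notation (the paper's exact symbols may differ): since the agents interact with independent environment copies, the joint trajectory density factorizes across agents, so the score function with respect to $\theta_i$ involves only agent $i$'s own trajectory.

```latex
% Joint density factorizes over the m independent trajectories:
%   p_\theta(\tau_1,\dots,\tau_m) = \prod_{j=1}^m p_{\theta_j}(\tau_j).
% Hence, for any function f of the joint trajectories (e.g., an entropy
% estimate of the mixture state distribution d_p):
\nabla_{\theta_i}\, \mathbb{E}_{\tau_1,\dots,\tau_m}\big[ f(\tau_1,\dots,\tau_m) \big]
  = \mathbb{E}\big[ f(\tau_1,\dots,\tau_m)\, \nabla_{\theta_i} \log p_{\theta_i}(\tau_i) \big],
% because \nabla_{\theta_i} \log \prod_j p_{\theta_j}(\tau_j)
%   = \nabla_{\theta_i} \log p_{\theta_i}(\tau_i).
```

That is, the standard log-derivative trick still applies to the mixture objective because differentiating the joint log-density with respect to $\theta_i$ leaves only agent $i$'s term.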
Summary: This paper investigates how to effectively maximize state entropy exploration in parallel agent settings. The authors propose a framework where multiple agents, each operating in separate environment replicas, are trained to collectively maximize the entropy of their visited state distribution while promoting diversity among their exploration strategies. They introduce a parallel learning objective that explicitly balances individual agent entropy with inter-agent diversity, and develop a policy gradient algorithm (PGPSE) to optimize this objective. The paper includes theoretical analysis on concentration properties showing that parallel agents with diverse policies can achieve faster convergence to high-entropy distributions compared to single agents. Experimental results on gridworld environments demonstrate that the proposed approach outperforms single-agent baselines in terms of state entropy, support size, and performance in downstream offline RL tasks. Claims And Evidence: The paper's claims about the benefits of parallel exploration with diverse agents are generally supported by the theoretical and empirical evidence presented. However, the claim about the superiority of their approach over existing methods is not fully supported due to limited comparison with relevant baselines. While they show improvement over single-agent exploration and random policies, they don't compare against other methods specifically designed to promote diversity among agents (like DIAYN or diversity-promoting MARL approaches). This is a limitation in evaluating the novelty and effectiveness of their contribution. Methods And Evaluation Criteria: The proposed methods for parallel state entropy maximization make sense for the problem at hand. The evaluation criteria focusing on normalized entropy, support size, and downstream offline RL performance are appropriate metrics for exploration quality. 
The environments used (gridworlds with different complexity levels) allow for clear visualization and interpretation of results. However, the paper only considers simple discrete gridworld tasks and does not consider more challenging or continuous domains. The evaluation would be more convincing if it included more challenging environments beyond gridworlds and compared against state-of-the-art exploration methods that explicitly promote diversity, such as DIAYN or diversity-centered MARL approaches like "Celebrating Diversity in Shared Multi-Agent Reinforcement Learning." Theoretical Claims: The theoretical analysis appears sound. Experimental Designs Or Analyses: The experimental design is generally sound but limited in scope. However, the experiments are confined to relatively simple gridworld environments, and the paper lacks comparison with more relevant baselines that specifically address diversity among agents. This makes it difficult to assess the true novelty and contribution of the proposed approach relative to existing methods. Supplementary Material: Yes, I reviewed all the supplementary material, especially the environment details and the additional experimental results. Relation To Broader Scientific Literature: The paper builds upon prior work on state entropy maximization for exploration in reinforcement learning, particularly in the reward-free setting. It extends this concept to parallel settings, which is a relevant direction given the increasing use of parallelization in modern RL systems. However, the paper doesn't adequately position itself relative to the literature on diversity-promoting exploration strategies. Particularly missing is a comparison with methods like DIAYN, which explicitly maximizes diversity among skills/policies, or diversity-centered MARL approaches like "Celebrating Diversity in Shared Multi-Agent Reinforcement Learning", which explicitly aims to increase diversity among agents' behaviors.
Essential References Not Discussed: The coverage is largely appropriate, though it would be good to discuss "Celebrating Diversity in Shared Multi-Agent Reinforcement Learning." Other Strengths And Weaknesses: Strengths: - The theoretical analysis of concentration properties provides useful insights into why parallel diversity is beneficial. - The formulation of the parallel learning objective is elegant and intuitive. - The visualization of learned policies and datasets helps in understanding the behavior of the method. Weaknesses: - Limited comparison with relevant baselines that specifically address diversity among agents. - Experiments are confined to relatively simple gridworld environments. Other Comments Or Suggestions: - Consider expanding the experimental evaluation to include more complex environments. - The visualization in Figure 10 showing the different policies learned by parallel agents is interesting, but could benefit from more quantitative analysis of the diversity. Questions For Authors: - Why didn't the authors compare the proposed approach with diversity-promoting methods like DIAYN or diversity-centered MARL approaches? Such comparisons would provide stronger evidence for the novelty and effectiveness of the approach. - How would the method perform in more complex environments beyond gridworlds? The current environments are too simple to fully demonstrate the benefits of parallel diversity. - Have the authors considered extending the approach to continuous state and action spaces, which are common in real-world applications like robotics? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's time and effort in evaluating our work and greatly value their insightful comments and suggestions. To clarify the key points of discussion and our design choices, we provide the following responses. **Why didn’t the authors compare the proposed approach with diversity-promoting methods like DIAYN or diversity-centered MARL approaches?** - **MARL:** We want to underline an important difference between the typical MARL setting and ours. In MARL, multiple agents are acting in the *same* environment. They need coordination because their trajectories are dependent. In our setting, multiple agents interact with *independent* copies of the environments. We show that coordination is useful in specializing the objective of each single agent, but their trajectories are independent. This makes the two settings significantly different. - **Celebrating Diversity in Shared Multi-Agent Reinforcement Learning:** While the paper introduces policy diversification via mutual information maximization, it remains within the MARL framework, which we aim to differentiate from. Since the agents share policy parameters in a non-reward-free setting, we did not initially consider it. The key distinction is that our agents are fully independent, driven solely by the entropy of the mixture state distribution. We appreciate the reviewer’s suggestion and will cite the paper in the final version to clarify our position. - **DIAYN:** We note that the intended use of DIAYN is substantially different than our setting. In the original paper, the authors use DIAYN to learn diverse *skills*, which are specialized policies or options that may be combined in the same environment to achieve complex goals. Here, we want to learn diverse policies to collect maximum entropy data across parallel simulators. However, we agree with the reviewer that DIAYN could be adapted to our setting, making the comparison potentially interesting. 
We will include a comparison with DIAYN in a revised version of the manuscript. **How would the method perform in more complex environments beyond gridworlds? Have the authors considered extending the approach to continuous state and action spaces?** We fully agree on the importance of extending our approach to continuous state and action spaces, which we see as a natural direction for future work. Our PGPSE algorithm can be adapted to these settings by computing trajectory entropy using non-parametric estimators, as explored in works like *Task-Agnostic Exploration via Policy Gradient of a Non-Parametric State Entropy Estimate* and *Behavior From the Void: Unsupervised Active Pre-Training*. This work serves as a foundational step in advancing parallel learning, focusing on defining the setting and establishing the theoretical basis for replica learners.
Embedding Safety into RL: A New Take on Trust Region Methods
Accept (poster)
Summary: This paper considers the problem of constrained MDPs. The key idea is to modify the geometry of the policy space to ensure that trust regions contain only safe policies. This is achieved by introducing a new family of policy divergences that incorporate certain mirror functions. The authors provide theoretical guarantees on the convergence of C-NPG and theoretical properties of the C-TRPO updates. In addition, the authors also present empirical evaluations across 8 different tasks to show that C-TRPO reduces constraint violations while maintaining competitive returns. Claims And Evidence: Yes. The theoretical properties (convergence and optimality) are well-supported by the proofs in Section 4 and Appendix C. The empirical results demonstrate reduced constraint violations. The comparison is comprehensive and fair. Methods And Evaluation Criteria: Yes, the proposed method C-TRPO makes sense, as it provides a theoretically sound way to incorporate safety constraints into the policy geometry. The evaluation framework comprehensively assesses both theoretical claims and practical performance while maintaining relevance to real-world safety constraints. Theoretical Claims: Yes, I examined several key theoretical proofs in the paper, particularly in Section 4 and Appendix C. I did not find any obvious errors in the proofs, though some proofs in the appendix (particularly for technical lemmas) could benefit from more detailed explanations of intermediate steps. Experimental Designs Or Analyses: 8 tasks from the Safety Gymnasium benchmark are included in the experiments for empirical analysis. Overall, the experimental design is sound. Supplementary Material: Yes. I checked the proofs in Appendix C and the additional results in Appendix D, where the authors present additional analysis to support the claims.
Relation To Broader Scientific Literature: This paper synthesizes ideas from multiple areas, applying concepts from optimization and control theory to address practical challenges in safe reinforcement learning. It represents a step forward in making RL more applicable to safety-critical real-world scenarios. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The idea of combining trust regions with barrier functions is novel, and it provides a better solution for solving constrained MDPs. 2. The paper provides strong theoretical guarantees and a clean mathematical framework connecting trust regions and safety constraints. Weaknesses: 1. The paper has limited evaluation on larger-scale problems. Other Comments Or Suggestions: No. Questions For Authors: 1. How sensitive is C-TRPO to the accuracy of the estimated cost function? Could you provide an analysis or experiments showing how performance degrades with increasingly noisy or misspecified constraints? 2. How does C-TRPO handle multiple, potentially conflicting constraints? Could you provide theoretical insights or empirical results for scenarios with multiple safety criteria? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > How sensitive is C-TRPO to the accuracy of the estimated cost function? Could you provide an analysis or experiments showing how performance degrades with increasingly noisy or misspecified constraints? This is an excellent question. While we do not yet have a comprehensive theoretical analysis, we are running experiments to assess C-TRPO’s robustness to noisy or misspecified cost function estimates and will report results in the updated appendix. > How does C-TRPO handle multiple, potentially conflicting constraints? Could you provide theoretical insights or empirical results for scenarios with multiple safety criteria? C-TRPO naturally extends to multiple constraints. The benchmark environments we consider already include multiple cost signals, which are aggregated into a single constrained objective. While our current experiments adhere to this setup, it is possible to modify the benchmark tasks to enforce separate constraints on individual cost functions. We will expand our discussion on handling multiple constraints in the final version of the paper. Theorem 4.5 already implies that the optimal constrained policy $\pi^*_{safe}$ found by C-NPG satisfies as few constraints with equality as required to be optimal, which is discussed in the respective paragraph. We expect similar results to hold for C-TRPO as well. Additionally, we recognize the importance of empirical evaluation in scenarios with distinct, potentially conflicting constraints, and we consider this an important direction for future work.
Summary: In this paper, the authors present the idea of solving Constrained Markov Decision Processes (CMDPs) using trust regions that strictly obey the constraints while allowing for return maximization. Earlier approaches work with KL-divergence-based trust regions and try to recover the policy using hysteresis if the constraints of the CMDP are violated. The paper works in the state-distribution space, where the KL divergence is expressed as a Bregman divergence and the constraints of the MDP are converted to barrier functions that make the divergence grow unboundedly as a constraint is violated. This creates a "safe" trust region within which a policy is optimized using natural-policy-gradient-style updates. If, due to empirical approximations, a constraint is violated, the hysteresis approach is used to bring the policy back into the safe region. The paper presents important theoretical results for designing a safe trust region, guarantees for reaching the optimal policy of the given CMDP, and safety guarantees during training. The proposed methods are compared against other baselines from the safe RL literature on Safety Gym environments. The results demonstrate that the proposed approach provides high returns while keeping the violations minimal. Claims And Evidence: The contributions claimed in the work are supported with theoretical and empirical evidence. Methods And Evaluation Criteria: - The algorithm proposed in the paper is closely related to Trust Region Policy Optimization (TRPO) [1] and Constrained Policy Optimization (CPO) [2]. The components added to TRPO and CPO are justified properly and make sense. - The paper uses Safety Gymnasium [3] for its experimentation. The choice of environments is apt. As for metrics, expected returns and constraint violations are compared across multiple algorithms. [1] Schulman, J., Levine, S., Moritz, P., Jordan, M. I., and Abbeel, P. Trust region policy optimization, 2017.
Constrained policy optimization, 2017. [3] Ji, J., Zhang, B., Zhou, J., Pan, X., Huang, W., Sun, R., Geng, Y., Zhong, Y., Dai, J., and Yang, Y. Safety gymnasium: A unified safe reinforcement learning benchmark. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2023. Theoretical Claims: I have not checked the proofs of Propositions 4.1, 4.2, and 4.3 or the proofs of Theorems 4.4 and 4.5. Experimental Designs Or Analyses: - I had a problem understanding the results depicted in Figure 3. Why are the results aggregated together across multiple tasks? How can we compare the average of normalized rewards and constraint violations across 8 tasks together? Supplementary Material: No. Relation To Broader Scientific Literature: Unlike earlier methods, the paper presents a way to design a safe trust region and provides a practical algorithm to optimize returns within this trust region. The barrier function over state distributions is a clever way of inducing such a safe trust region. The paper provides C-NPG, a natural policy gradient variation with a safe trust region, and C-TRPO, TRPO under a safe trust region; these algorithms naturally inherit the methodology of earlier prominent works, with CPO's hysteresis idea allowing for safe policy updates. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: The strength of the paper lies in its theoretical rigor and the innovation behind the design of the safe trust region. The paper takes the more principled route of adhering to the safe trust region while updating policies. Weakness: I don’t see any major weakness in this work, although I have not checked the proofs, which could contain mistakes.
Other Comments Or Suggestions: I had a few suggestions regarding writing: - On line 233 (left column) the paper starts discussing the divergence $D_C$, which is said to be defined below but is used multiple times before it is formally defined in Equations 16 and 17. - Also, Algorithm 1 could be moved after the description of the entire methodology; that way it avoids the unnecessary introduction of variables and steps that are not discussed in the text until Algorithm 1 is referenced. Questions For Authors: A question regarding Theorem 4.4: isn't CPO also safety-invariant? It would be sufficient to show that CPO is invariant; then C-NPG with any choice of beta would be invariant too. I feel the property that needs to be emphasized more in the text is the conservativeness of the policy update under higher beta values w.r.t. CPO. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed comments and suggestions! We respond to each of your points in detail below. > I had a problem understanding the results depicted in figure 3. Why are the results aggregated together across multiple tasks? How can we compare the average of normalized rewards and constraint violations across 8 tasks together? We follow a standard procedure from unconstrained RL to provide a concise summary of results, adopting the methodology from [rliable](https://github.com/google-research/rliable): - Each algorithm is trained across multiple seeds per environment. - Performance is normalized per environment relative to a reference algorithm. - Normalized results are then aggregated across seeds and tasks to enable a high-level comparison. To ensure robustness, we use interquartile mean aggregation and report bootstrapped confidence intervals, following best practices in RL evaluation. While this approach provides an informative summary, we acknowledge that individual task-level variations are important, which is why we also include per-task sample efficiency curves in the appendix. > The barrier function over state distributions is a clever way of inducing such a safe trust region. Thank you for the positive assessment! > C-TRPO inherits methodology from CPO’s hysteresis idea that allows for safe policy updates. We would like to clarify that hysteresis was introduced in our work, not in the original CPO paper. However, our results show that applying hysteresis to CPO improves its performance, as reported in our ablation studies. To achieve the full performance, C-TRPO’s safe trust region is necessary. > On line 233 (left column), the paper discusses divergence before it is formally defined (Equations 16 and 17). Thank you for this suggestion! There are two key divergences: - The theoretical divergence $D_C$ (Equation 10). - The approximated divergence $\bar{D}_C$ (Equation 16). 
We will clarify this distinction earlier in the manuscript. > Algorithm 1 should be moved after the full methodology description to improve readability. We agree and will restructure the manuscript accordingly. > Question regarding Theorem 4.4, Isn't CPO also safety invariant? It would be sufficient to show the CPO is invariant and then, C-NPG with any choice of beta would be invariant too. Neither CPO nor C-TRPO is strictly safety-invariant as defined in Theorem 4.4, due to approximation and estimation errors. However, Proposition B.2 shows that in the limit of small step sizes, C-TRPO converges to the safety-invariant C-NPG. In contrast, CPO does not follow a natural gradient update, making its small-step-size limit unclear. While a bound similar to Proposition 4.3 exists for CPO, it is less conservative than C-TRPO’s. Importantly, both methods provide only a bounded approximation of the ideal safety-invariance property of C-NPG, even with perfect value function knowledge. When value functions are estimated from finite samples, these bounds are further affected. In practice, C-TRPO offers a more conservative safety guarantee with minimal computational overhead or performance loss. Since estimation errors also impact safety, we plan to explore the finite-sample safety properties of both methods in future work. > I feel the property that needs to be more emphasized in the text is the conservativeness of the policy update under higher beta values w.r.t. CPO. Thank you for this suggestion. We will highlight this aspect in the manuscript. --- Rebuttal Comment 1.1: Comment: In light of the authors addressing most of my concerns and other reviewers having checked the theoretical correctness, I am increasing my score to 4 (accept).
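The rliable-style aggregation procedure described in the rebuttal above (per-task normalization, interquartile-mean aggregation, bootstrapped confidence intervals) can be sketched as follows. This is a hedged illustration with toy data, not the authors' evaluation code.

```python
import numpy as np

rng = np.random.default_rng(1)

def iqm(scores):
    # Interquartile mean: average of the middle 50% of values
    q25, q75 = np.percentile(scores, [25, 75])
    return scores[(scores >= q25) & (scores <= q75)].mean()

def bootstrap_ci(scores, n_boot=2000, alpha=0.05):
    # Percentile-bootstrap confidence interval for the IQM
    stats = [iqm(rng.choice(scores, size=scores.size, replace=True))
             for _ in range(n_boot)]
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Toy data: already-normalized scores for 8 tasks x 5 seeds,
# pooled across tasks and seeds for a high-level comparison
scores = rng.normal(loc=0.8, scale=0.1, size=(8, 5)).ravel()
point = iqm(scores)
lo, hi = bootstrap_ci(scores)
print(point, (lo, hi))
```

The IQM discards the top and bottom quartiles before averaging, which makes the aggregate robust to outlier seeds while remaining more statistically efficient than the median.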
Summary: The paper introduces Constrained Trust Region Policy Optimization, a new constrained RL algorithm based on trust region policy methods. The trust region is constructed to contain only safe policies for the update step. The new algorithm enjoys good theoretical properties, with improvement and safety guarantees similar to Constrained Policy Optimization, and provable safety and convergence properties under some additional assumptions. Because the trust region is constrained to safe policies only, the algorithm is designed to incur fewer constraint violations during training than the popular Lagrangian approaches. The algorithm is benchmarked on different experiments, where it is shown to be competitive with state-of-the-art algorithms like CPO or CUP. Claims And Evidence: The practical claims are that the algorithm is competitive with state-of-the-art algorithms, with a focus on the number of safety violations during training while still providing a good optimal reward. The claim is supported by the experiments, where the algorithm is benchmarked against other state-of-the-art algorithms in Safety Gym environments. Methods And Evaluation Criteria: The method for evaluating the performance of the algorithm is benchmarking on Safety Gym environments, and the criteria for the evaluation are the number of violations during training and the cost and average reward at the end of training. Theoretical Claims: The theoretical claims are mainly Proposition 4.1 (reward improvement), Proposition 4.3 (worst-case constraint violation), Theorem 4.4 (safety during training), and Theorem 4.5 (convergence).
Experimental Designs Or Analyses: The experimental design evaluates the influence of the hyperparameters, compares C-TRPO to CPO with hysteresis, and more generally evaluates C-TRPO against other state-of-the-art algorithms on Safety Gym environments. Supplementary Material: The supplementary material includes an extended background, a geometric analysis of the newly designed safe trust region, implementation details for the continuous MDP case, proofs of theoretical claims, and experiments. Relation To Broader Scientific Literature: C-TRPO fits into a growing body of primal and primal-dual methods for solving constrained MDPs, and is a primal approach closely tied to CPO and TRPO. To the best of my knowledge, the approach in the paper is new and different from, e.g., PCPO. Essential References Not Discussed: I do not see any essential references that are not discussed in the paper. Other Strengths And Weaknesses: The paper's strength lies in designing a new approach with very good theoretical properties that are proven (a regret analysis would strengthen the paper even more). The experiments are also quite convincing: even though the improvement in practice is not extreme, it is still significant. The paper is well-written. Other Comments Or Suggestions: The paper provides good background and intuition, and the subtleties of the links with NPG are well-explained. Questions For Authors: I have no questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive assessment of our work! We agree that a regret analysis would further strengthen the theoretical contribution and see this as a valuable direction for follow-up work.
Summary: The paper proposes C-TRPO and C-NPG for solving CMDPs. Mirror functions are used to define policy divergences that are finite only for safe policies. This divergence is then used to reshape the policy space geometry to ensure that trust regions contain only safe policies. The algorithms are analysed theoretically, and it is empirically shown that they lead to fewer constraint violations while obtaining similar rewards. Claims And Evidence: The claims in the paper are clear and convincing. Methods And Evaluation Criteria: The experiments are thorough, and the benchmark considered is widely used for safe RL methods. I wonder why log-barrier-based approaches such as (1) or (2) are not used as baselines. Furthermore, I would also have expected Saute RL (3) to be a baseline. 1. https://www.jmlr.org/papers/volume25/22-0878/22-0878.pdf 2. https://arxiv.org/abs/2410.09486 3. https://arxiv.org/pdf/2202.06558 Theoretical Claims: I think the theoretical claims are correct. Experimental Designs Or Analyses: The experiments are evaluated on a well-established safe RL benchmark and are sound, in my opinion. Supplementary Material: No supplementary material is provided. Relation To Broader Scientific Literature: The problem addressed is important and relevant. The insights drawn for trust region methods are also broadly important. The only thing I am unsure about is that trust region methods are generally very sample-inefficient and, therefore, rarely applied for direct learning in the real world. The common strategy is to train the policy in simulation. In this case, what is the benefit of analysing the cost regret? Effectively, what we care about is that the final policy is safe. I would appreciate it if the authors could discuss this in the paper. Essential References Not Discussed: From my knowledge, I think the paper covers the essential references well.
Other Strengths And Weaknesses: **Strengths**: -- The paper tackles an important problem of safe RL using widely applied trust region approaches. -- The paper is well written and easy to follow. -- The experiments are thorough. **Weaknesses**: -- Log barrier baselines are not considered in the experiment. -- I am not sure about analysing cost-regret with trust region methods given that they are rarely applied for learning in the real world due to sample inefficiency. Other Comments Or Suggestions: No additional comments. Questions For Authors: See strengths and weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! We have addressed your specific comments in detail below. > Log barrier baselines are not considered in the experiment. We considered **IPO** as a practical log-barrier baseline, along with **P3O**, a proximal adaptation. Regarding the specific works mentioned: - LB-SGD (Usmanova et al.) is discussed in our related work section. However, our focus is on practical deep RL algorithms, whereas LB-SGD primarily offers theoretical insights. Given the lack of large-scale empirical evaluations and implementation details for deep function approximation in LB-SGD, we did not include it as a baseline in our experiments. - Saute RL presents an interesting comparison but tackles a subtly different problem formulation than standard CMDPs. As seen in Definition 4 of their work, Saute RL imposes stricter constraints by disallowing stochastic policies from occasionally violating constraints and assuming non-negative safety costs. While this formulation is reasonable for safety, it does not always correspond to a general CMDP. A key strength of C-TRPO is its reliability in solving standard CMDPs, making it applicable to broader tasks like diverse policy optimization (e.g., [1]). A rigorous theoretical and empirical comparison would require additional work beyond the current scope. However, we will mention Saute RL in the manuscript and highlight its comparison with C-TRPO as an important direction for future research. - ActSafe is a newly accepted work that we were not previously aware of. Since it falls under model-based safe RL, which we briefly discuss in our related work section, we will add a reference in the final manuscript. However, we want to highlight key differences from our approach. Specifically, in Definition 4.7, ActSafe indeed defines a distance measure between policies, which is informed by cost continuity. 
In contrast, our Bregman divergence explicitly captures differences in the cost values of the policies, representing a key conceptual distinction in how safety is enforced. [1] Zahavy, Tom, et al. "Discovering Policies with DOMiNO: Diversity Optimization Maintaining Near Optimality." The Eleventh International Conference on Learning Representations. > I am not sure about analysing cost-regret with trust region methods given that they are rarely applied for learning in the real world due to sample inefficiency This is an important point. While near on-policy approaches like TRPO tend to be sample inefficient, they remain valuable due to their stability and well-established theoretical foundations in the function approximation setting. While they may not be ideal for online real-world learning, understanding how to minimize cumulative constraint violation regret is essential, as any algorithm used for fine-tuning on a real system must consider it. We believe that the insights from C-TRPO’s regret analysis will contribute to the development of future algorithms that are both sample-efficient and safety-aware.
Raptor: Scalable Train-Free Embeddings for 3D Medical Volumes Leveraging Pretrained 2D Foundation Models
Accept (spotlight poster)
Summary: The paper introduces a novel design that enables the application of 2D foundation models to 3D data tasks without requiring additional pretraining. The proposed method significantly reduces the size of feature maps, and demonstrates strong performance even in scenarios with limited training samples. Additionally, the authors provide detailed theoretical descriptions and ablation studies to support their design choices.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes

Theoretical Claims: Yes

Experimental Designs Or Analyses: Yes

Supplementary Material: N/A

Relation To Broader Scientific Literature: Yes

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:
Strengths: The method provides a new direction for 3D medical image classification in a training-free manner.
Weaknesses:
1. The DINO series models are not trained on medical images. Why can they perform well? The authors should give more analysis and explanation.
2. Spatial dimensionality reduction method: The authors employ random projection for feature dimensionality reduction. However, in Section 3.3, they perform an averaging operation along the observing direction on the intermediate representation z, reducing the corresponding spatial dimension. Why are random projection or other dimensionality reduction techniques not used for this step as well?
3. Computational efficiency comparison: The proposed method requires processing slices from three directions using a 2D foundation model. The authors only report inference time for the proposed method, but it would be more informative to compare inference speeds with other 3D methods.
4. Reliability of random projections: The authors evaluate the reliability of random projections using three random seeds. However, this may not be sufficient. A larger number of random seeds should be tested, and error bars should be included to better illustrate the impact of randomness on performance.
Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We also agree with the reviewer that our novel approach has several implications for the field, as it would make the computation of embeddings for volumes much faster while maintaining the ability to perform downstream tasks. We also appreciate the reviewer’s evaluation of the theoretical background of our approach. > **Why DINO works for medical data:** The high-level intuition is that the semantically meaningful features of medical volumes are still part of the universal image features (edges, textures, shapes), which a broad 2D encoder has already learned from large‐scale, diverse data. RAPTOR uses generic image foundation models like DINO to capture any meaningful image features and performs low rank approximations to retain only the relevant ones. This is in contrast to many existing medical image/volume models that try to learn such mappings from scratch or fine-tune a general model to be biased towards them. To illustrate, we projected both generic (ImageNet) and medical (MedMNIST) embeddings onto the principal components derived from the generic dataset. In the table below, we calculate the total variance explained as we add more PC’s. Although the medical data do not align with the very top principal directions, they catch up as more components are added (eventually matching the generic variance). This demonstrates that medical image features lie within the general “image” space. 
| #PCs | General Explained Var | Medical Explained Var | ratio(Medical/General) | |-|-|-|-| | 100 | 0.613 | 0.225 | 0.367 | | 200 | 0.765 | 0.397 | 0.518 | | 500 | 0.926 | 0.734 | 0.793 | | 1000 | 0.999 | 0.994 | 0.996 | > **Why we don’t use another method to aggregate slices:** We explored the simplest approach—summing and random projection—because it is computationally lightweight and maintains near‐orthogonal signals from the 2D tokens without additional training (also note that averaging/summing is identical in expectation to random projection into dimension 1). More structured dimension‐reduction techniques may introduce significantly higher computational or memory costs, especially for high‐resolution volumes. We agree that specialized subspace methods are an interesting direction for future work, particularly if they exploit known volumetric structures. As an example, please see our response to reviewer zwUQ on partitioned embedding. > **Time taken to run Raptor:** We appreciate the reviewer’s interest in a more detailed efficiency comparison. In the table below, we provide approximate times for fine-tuning, inference, and one-time embedding extraction across Raptor, SuPreM, and Merlin when training on the Organ dataset (971 volumes). While Raptor does incur a one-time cost to produce embeddings, its subsequent fine-tuning is minimal (logistic regression or a small MLP) and can often run on CPU. This contrasts with end-to-end 3D methods (SuPreM, Merlin) which must either be entirely trained or extensively fine-tuned, typically on a GPU with longer runtimes. 
| Method | Medical Pre-training | GPU | Embedding (per vol) | Fine-tuning | Inference (per vol) | |-|-|-|-|-|-| | Raptor | None (use Image FM) | RTX2080Ti | $4.2$ secs | $1.3$ mins | $<0.1$ secs | | SuPreM | Days | A100 | - | $70.0$ mins | $<0.1$ secs | | Merlin | Days | A100 | - | $63.0$ mins | $<0.1$ secs | > **Stability of random projections:** We also tested more seeds (up to 10) for the 3D MedMNIST dataset to assess the stability of random projection. In the table below, we observe consistently low standard deviations, suggesting that random projections remain highly reliable in practice: | Method | K | Seed 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Avg. | Std. | |--------------|-----|--------|-------|-------|-------|-------|-------|-------|-------|-------|-------|--------|--------| | **Raptor** | 1 | 0.818 | 0.817 | 0.793 | 0.823 | 0.831 | 0.818 | 0.831 | 0.814 | 0.801 | 0.831 | 0.818 | 0.0121 | | | 5 | 0.890 | 0.860 | 0.864 | 0.869 | 0.869 | 0.877 | 0.868 | 0.858 | 0.872 | 0.866 | 0.869 | 0.0086 | | | 10 | 0.866 | 0.896 | 0.876 | 0.875 | 0.882 | 0.879 | 0.871 | 0.876 | 0.877 | 0.886 | 0.878 | 0.0078 | | | 100 | 0.901 | 0.899 | 0.900 | 0.898 | 0.905 | 0.891 | 0.899 | 0.898 | 0.900 | 0.898 | 0.899 | 0.0034 | | | 150 | 0.898 | 0.897 | 0.897 | 0.894 | 0.898 | 0.903 | 0.902 | 0.905 | 0.902 | 0.904 | 0.900 | 0.0034 | We hope that the additional analysis provides better insight into the capabilities of our method. If these clarifications have addressed any lingering concerns, we would sincerely appreciate if the reviewer would consider raising their score.
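The seed stability reported above is expected from the Johnson–Lindenstrauss behavior of Gaussian random projections. A minimal numpy sketch (toy dimensions — 50 tokens, d=1024, k=256 — are made up; the 1/√k scaling follows the sklearn convention raised by another reviewer, while the paper samples from N(0,1), differing only by a constant factor) showing that pairwise distances between token vectors are approximately preserved regardless of seed:

```python
import numpy as np

def random_project(tokens, k, seed):
    """Project d-dimensional token vectors down to k dimensions with a Gaussian matrix."""
    d = tokens.shape[1]
    rng = np.random.default_rng(seed)
    R = rng.normal(0.0, 1.0, size=(d, k)) / np.sqrt(k)  # 1/sqrt(k) keeps norms roughly unchanged
    return tokens @ R

rng = np.random.default_rng(0)
tokens = rng.normal(size=(50, 1024))  # toy stand-in for 2D encoder token features

orig = np.linalg.norm(tokens[0] - tokens[1])
for seed in range(3):
    proj = random_project(tokens, k=256, seed=seed)
    dist = np.linalg.norm(proj[0] - proj[1])
    print(f"seed {seed}: relative distance error {abs(dist - orig) / orig:.3f}")
```

The relative error concentrates around O(1/√k), which is why the standard deviations across seeds shrink as K grows in the table above.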
Summary: This paper introduces Raptor (Random Planar Tensor Reduction), a train-free method for generating semantically rich embeddings for volumetric data. Raptor leverages a frozen 2D foundation model, pretrained on natural images, to extract visual tokens from individual cross-sections of medical volumes. These tokens are then spatially compressed using random projections, significantly reducing computational complexity while retaining rich semantic information. Extensive experiments are carried out on multiple medical image datasets, and the experimental results are analyzed in detail, and the experimental results are excellent. The superiority and efficiency of the proposed method are verified. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: Appendix A.3 restates a version of the Johnson-Lindenstrauss (JL) lemma (Dasgupta & Gupta, 2003) and demonstrates its applicability in the proposed multi-view volumetric framework. No issues were identified in the proof or its adaptation to this setting. Appendix A.4 provides the mathematical formulation of Raptor and proves that the model’s effectiveness is guaranteed under the assumption that the slices are "smooth". No logical gaps or technical flaws were detected in the derivations. Experimental Designs Or Analyses: The comparative experiments in Section 4.2 (Main Results: Classification) and Section 4.3 (Main Results: Regression) were thoroughly reviewed. For classification tasks, the comparisons against baselines include the metrics AUC and Accuracy (ACC), while regression tasks utilize the R² score. Additionally, the evaluation of varying training data sizes on the Synapse dataset in Section 4.4 and the impact of different embedding sizes across methods in Section 4.5 were examined. The overall design of these comparative experiments is methodologically sound, with clear and logically structured comparisons. 
Regarding ablation studies, the sensitivity analysis to the number of random projections K in Section 5.1, the influence of viewpoint numbers in Section 5.2, and the exploration of captured feature ranges in Section 5.3 were assessed. The ablation settings are well-defined and systematically address critical components of the proposed method. However, two aspects could be further clarified: 1. Statistical Validation in Regression Analysis: While the r² scores in Section 4.3 provide useful insights, it would be valuable to supplement these results with statistical tests (e.g. p-values) to enhance the interpretability of performance differences, especially in small-sample settings. 2. Using MNIST numbers with different shapes (e.g., “1” vs. “8”) may introduce bias. To better isolate the impact of scale variation, the authors might consider controlling for shape differences (e.g., using a single digit class like “0”) or exploring class-specific error patterns (e.g., through confusion matrices). Supplementary Material: I have reviewed Appendices A.1 to A.7, which comprehensively supplement the main text by covering the mathematical formulations of the model, dataset characteristics, compression methods, baseline model selection, and other critical technical details. The content is thorough and well-organized, providing necessary depth to support the methodology and claims in the paper. No significant gaps were identified. Relation To Broader Scientific Literature: The authors position their work within the context of image foundation models (specifically DINOv2-L, Oquab et al. 2023). 
Their key innovation—compressing tokens inferred from frozen image foundation models via random projections on orthogonal cross-sections of volumetric data—addresses two limitations of prior work: high computational costs (Hatamizadeh et al., 2021; Wasserthal et al., 2023; Li et al., 2024; Cox et al., 2024; Wu et al., 2024b) and the limited scale of 3D medical datasets (160K volumes, Wu et al., 2024b), which remain orders of magnitude smaller than 2D image datasets (1.2B images, Oquab et al., 2023). This study is the first to integrate scalable 2D foundation model priors with computationally efficient 3D medical analysis, offering a pathway to leverage large-scale 2D pretraining for volumetric tasks. Essential References Not Discussed: No Other Strengths And Weaknesses: strength: The training-free design of this work significantly reduces model construction costs, while achieving robust performance on an 11GB RTX 2080 Ti. Weakness: Although the authors claim that they demonstrate their approaches in 10 tasks, it only contains two simple tasks, classification and regression. I would suggest the authors to demonstrate the proposed method on more fundamental and challenging tasks (like segmentation). Other Comments Or Suggestions: In Figure 1, if the metrics for regression tasks (r²↑, indicated with +) represent mean values, this should be explicitly stated to avoid potential misinterpretation. Questions For Authors: 1) Could the feature extraction approach that processes orthogonal cross-sections independently potentially compromise inter-slice contextual features in 3D medical volumes? 2) The application of Raptor’s features to classification and regression downstream tasks could be further clarified. To enhance methodological transparency, it would be beneficial to explicitly illustrate this process in a workflow diagram, demonstrating how the extracted features are integrated into task-specific heads or decision pipelines. Code Of Conduct: Affirmed. 
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's keen observations and suggestions. Thanks to their recommendations, we were able to further uncover Raptor's capability in a challenging task (segmentation) and are excited for future directions.

> **statistical tests (e.g. p-values) … especially for small sample settings**

We acknowledge the value of statistical tests for clarifying performance differences and intend to include them in a future revision. Given the strong performance of existing methods, we did not observe any significant improvement over the next-best method under the Bonferroni-corrected threshold (p < 0.05/19, reflecting 19 total datasets). However, we did observe some modestly significant improvements (p < 0.05). Among the regression datasets, we observed such improvements in four of the datasets (4/10). We did not observe any such improvements for the classification datasets. In terms of the subset experiments (varying training set size from 10~500), we similarly found modestly significant improvements (p < 0.05) in 3/5 settings for the UKBB white matter dataset, 1/5 in CCCC-II, and 2/5 in Synapse.

> **Is there any bias per digit in the simulations?**

We appreciate the suggestion to control for digit shape in our simulations. Accordingly, we fixed the digit to either 0, 1, or 8 and generated simulations for each case for benchmark. We observed that the choice of digit can slightly influence the results. Here we demonstrate these results with 300 generated training samples each, as the simulation and evaluation process is time consuming end-to-end.

| Method | Size | 0 | 1 | 8 | Avg. | Std. |
|--------------|-------|-------|-------|-------|--------|--------|
| **Raptor** | 64px | 0.798 | 0.678 | 0.753 | 0.743 | 0.049 |
| | 32px | 0.580 | 0.559 | 0.575 | 0.571 | 0.009 |
| | 16px | 0.552 | 0.514 | 0.517 | 0.527 | 0.017 |
| | 8px | 0.506 | 0.509 | 0.508 | 0.508 | 0.001 |

> **Segmentation**

We concur with the reviewer’s point that more downstream tasks need to be attempted. We have now conducted additional experiments on the Medical Segmentation Decathlon (https://www.nature.com/articles/s41467-022-30695-9), focusing on four tasks: hippocampus, spleen, colon, and hepatic vessel segmentation. Each task presents unique challenges (e.g., limited contrast for hippocampus, class imbalance for hepatic vessels). For Raptor, we trained a 2-layer convolutional head to learn a transformation of the embeddings to the segmentation.

| Task | Dataset Size (Train/Val/Test) | Model | IoU | Dice Score |
|-|-|-|-|-|
| Hippocampus | 182 / 39 / 39 | Raptor | 0.607 | 0.719 |
| | | MedSAM | 0.575 | 0.615 |
| Spleen | 28 / 6 / 7 | Raptor | 0.592 | 0.657 |
| | | MedSAM | 0.960 | 0.979 |
| Colon | 88 / 18 / 20 | Raptor | 0.597 | 0.597 |
| | | MedSAM | 0.841 | 0.906 |
| Hepatic Vessel | 212 / 45 / 46 | Raptor | 0.387 | 0.431 |
| | | MedSAM | 0.387 | 0.428 |

As shown here, Raptor achieves competitive results compared to MedSAM (a dedicated segmentation model). Notably, on the Hippocampus dataset, Raptor surpasses MedSAM’s Dice (0.719 vs. 0.615) and IoU (0.607 vs. 0.575), and performs similarly for hepatic vessels. MedSAM, however, excels on spleen and colon, which have fewer training samples. Although our primary goal was to obtain general-purpose embeddings, these early results suggest Raptor can also serve as a reasonable foundation for volumetric segmentation, without requiring large‐scale 3D training.

> **Figure 1 clarity**

We agree Figure 1 could be improved in clarity and will make the requested adjustment given the opportunity.
> **Would orthogonal feature processing compromise inter-slice contextual features** We appreciate this important point raised by the reviewer. We provide an intuition how the resulting Raptor embedding can still retain volumetric signals. For a raptor embedding $e$, a tensor of dimensions $3\times100\times16\times16$, the coordinate $(x,y,z)$ in the volume can be represented by three slices—one per axis—yielding tokens indexed by, e.g., $e[0,:,x,y]$, $e[1,:,y,z]$, and $e[2,:,x,z]$. Each patch within these slices is mapped into the K-dimensional embedding space; subsequent MLP layers can then fuse these partial 2D perspectives into a more coherent 3D representation. Our quantitative results on datasets like UKBB suggest that Raptor’s multi-view scheme does effectively capture enough high-level volumetric information for robust performance. > **Clearer workflow description on using Raptor embeddings** This is an important suggestion and we plan on including a diagram and a detailed description of the Raptor pipeline for downstream tasks. We thank the reviewer for the insightful questions, and hope our response addresses any remaining concerns. If so, we would greatly appreciate it if the reviewer could reflect these clarifications by raising their score of our work. --- Rebuttal Comment 1.1: Comment: I appreciate the authors’ additional experiments on segmentation tasks and their responses to other questions. However, my primary concern regarding the integration of extracted features into task-specific heads or decision pipelines remains unresolved. After careful consideration, I decided to maintain my original score of 3 (Weak Accept). --- Reply to Comment 1.1.1: Comment: We acknowledge that the details of how Raptor embeddings were used for the benchmarks were lacking in our previous response; this was due to the word limit, and we would like to explicitly discuss them here. 
> **Workflow description on using Raptor embeddings** First, we obtain Raptor embeddings for all volumes. We then follow the steps below to make predictions for each task: 1. For **classification tasks**, we performed logistic regression with an L2 penalty. We constructed feature matrices of dimensionality [sample size] x [embedding size] and prediction targets of dimensionality [sample size] x [number of classes]. Weights of 0.01, 0.1, 1.0, 10.0, and 100.0 for the L2 penalty were evaluated. The optimal weight was chosen based on the validation split of each dataset. We used scikit-learn’s LogisticRegression module. 2. For **regression tasks**, we fit a 3-layer MLP with input of dimensionality [embedding size] and predicted quantitative measures belonging to each brain region. The MLP (implemented with pytorch) had hidden layers of size 256 and we used the MSE loss. In order to prevent overfitting, we checkpointed the model weights only when validation loss improved. 3. For **segmentation tasks**, we refer to our detailed discussion with reviewer zwUQ. We appreciate the insightful questions raised by the reviewer, and hope that the additional information we have provided further supports the potential of our method. If there are any additional questions or concerns regarding our set up, we would be more than happy to answer them in detail; otherwise, we humbly ask the reviewer to reconsider their scoring of our method.
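Step 1 of the workflow above (a logistic-regression head on frozen embeddings, with the L2 weight chosen on a validation split) can be sketched with scikit-learn. The synthetic data stands in for real Raptor embeddings, and the standardization step is an assumption carried over from the authors' remark that embeddings are standardized before fine-tuning:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# stand-ins for real Raptor embeddings: [sample size] x [embedding size]
X_train, y_train = rng.normal(size=(200, 64)), rng.integers(0, 2, size=200)
X_val, y_val = rng.normal(size=(50, 64)), rng.integers(0, 2, size=50)

scaler = StandardScaler().fit(X_train)  # standardize embeddings before fitting
best = (None, -1.0)
for w in [0.01, 0.1, 1.0, 10.0, 100.0]:  # L2 penalty weights evaluated in the rebuttal
    # scikit-learn's C is the *inverse* regularization strength, hence C = 1/w
    clf = LogisticRegression(C=1.0 / w, penalty="l2", max_iter=1000)
    clf.fit(scaler.transform(X_train), y_train)
    acc = clf.score(scaler.transform(X_val), y_val)
    if acc > best[1]:
        best = (w, acc)
print("best L2 weight:", best[0], "val accuracy:", round(best[1], 3))
```

The regression workflow (step 2) follows the same pattern, replacing the linear classifier with a small MLP trained under an MSE loss and checkpointed on validation loss.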
Summary: This paper proposed a framework to leverage a pretrained large 2D encoder for 3D medical image analysis (i.e., classification and regression). By applying random projection to feature embeddings encoded from 2D slices taken of three orientations (i.e., sagittal, coronal, and axial) using DINOv2-L and concatenating an MLP, it provides a fine-tuning-free way to leverage a 2D encoder pretrained on 2D natural images. Compared with several encoders pretrained on large-scale medical image datasets, Raptor outperformed them in both classification and regression tasks.

Claims And Evidence: In the four claimed contributions (lines 90-102), the second and fourth claims are well-supported. For the first claim about data efficiency, it was only supported by a single example in Figure 4. It would be more convincing if the data efficiency of the model were further evaluated on one of CC-CCII, CTRG-C, CTRG-B in Table 3, and one of the regression tasks in Table 4. I found the third claim regarding scalability a bit confusing and wish the authors could clarify a bit during the rebuttal.

Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense.

Theoretical Claims: I checked for correctness. I have a question about the sampling of the random matrix for projection. In lines 697-698, the random matrix for random projection was sampled from $\mathcal{N}(0,1)$; I wonder why not sample from $\mathcal{N}(0,1/k)$, as the default option in the [sklearn package](https://scikit-learn.org/stable/modules/generated/sklearn.random_projection.GaussianRandomProjection.html#sklearn.random_projection.GaussianRandomProjection).

Experimental Designs Or Analyses: I think the overall experimental designs and analyses are sound and valid. I wonder why the encoders of other multimodal medical foundation models besides Merlin, such as Llava-med [1] and Med-flamingo [2], are not used for comparison.

1. Li, Chunyuan, et al. "Llava-med: Training a large language-and-vision assistant for biomedicine in one day." Advances in Neural Information Processing Systems 36 (2023): 28541-28564.
2. Moor, Michael, et al. "Med-flamingo: a multimodal medical few-shot learner." Machine Learning for Health (ML4H). PMLR, 2023.

Supplementary Material: Yes, I've gone through the supplementary material.

Relation To Broader Scientific Literature: There are many existing studies exploring training foundation models/encoders for medical images from scratch or fine-tuning existing models/encoders. This paper proposes a novel method to leverage a pre-trained 2D large encoder for volumetric medical image analysis without fine-tuning, which is a considerable development compared to previous studies.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: The presentation is good, but some parts can be further improved. Please refer to the Other Comments Or Suggestions and Questions For Authors sections.

Other Comments Or Suggestions:
1. Please consider supplying the parameter number for the encoder and embedding dimension in Table 1, which will help the reader better understand the memory footprint in Figure 5.
2. Please consider plotting the upper-bound results in Figure 4 (i.e., training on the full Synapse dataset).

Questions For Authors:
1. Lines 822-832 gave a reasonable explanation of why Raptor has worse performance on 3D FractureMNIST. From Figure 8, it seems most of the negative $\alpha_i$ exist in axial only. I wonder, if you run Raptor on the Coronal and Sagittal views (combined or separately, similar to the experiments in Table 6), would that improve the performance of Raptor on the Fracture dataset?
2. Is there any potential explanation for why MAE performed so badly in the regression task in Table 4?
3. It seems the setting of Raptor would generally work for global-level image understanding (e.g., classification, Visual Question Answering, captioning) and not work for dense-prediction tasks (e.g., segmentation). I wonder if the authors have explored the use of randomly projected embeddings for VQA and captioning.
4. I'm not so sure I correctly understand the term 'scalable' in the third claimed contribution (lines 96-98). Can you please elaborate on that?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed assessment of our manuscript. We appreciate that they viewed our experiments as supporting the utility of our method, and we agree that we could support some of our claims with additional experiments. We address each of the points raised below. > **Meaning of scalable in our work:** We use the term “scalable” to highlight some key aspects of Raptor: 1. It can process high-resolution 3D medical volumes with only a single pass through a fixed 2D encoder, avoiding the heavy computational cost of purely 3D models. 2. The framework can readily scale to larger or more diverse medical datasets and reuse the embeddings for multiple downstream tasks, as there is no training involved. 3. The inference step can be easily parallelized since we only require a frozen foundational model and random projections. > **More evidence for effectiveness with limited training samples:** _Due to the word limit, we have moved the subset results to our response to reviewer 28uY._ > **Why not sample from N(0, 1/k):** In terms of choice of noise for the random projections, we proceeded with the numpy default which was N(0, 1). Quantitatively we don’t expect a notable difference choosing a variance in this range, as the embeddings are eventually standardized in the fine-tuning step. > **Exploration of medical VLMs:** While Llava-Med and Med-Flamingo are strong multimodal methods in their respective domains, their encoders process 2D data, making direct comparison on volumetric (3D) tasks ambiguous (unlike Merlin, which was designed for 3D). Nonetheless, we tried substituting their image encoders into our pipeline in place of DINO: Raptor-LVM and Raptor-CLIP (note that Med-Flamingo leverages OpenAI’s CLIP as the image encoder, which we have previously benchmarked as an alternative to DINO). 
(AUROC shown) | Methods | Organ | Nodule | Fracture | Adrenal | Vessel | Synapse | |-|-|-|-|-|-|-| | Raptor-CLIP | 0.994 | 0.869 | 0.669 | 0.906 | 0.936 | 0.849 | | Raptor-LVM | 0.996 | 0.888 | 0.632 | 0.904 | 0.941 | 0.851 | | Raptor-B | _0.998_ | 0.904 | 0.647 | **0.930** | 0.945 | _0.922_ | | **Raptor** | **0.999** | **0.929** | 0.677 | _0.926_ | **0.966** | **0.943** | These alternatives show competitive performance but do not surpass DINO (our current proposed approach). > **Parameter number:** We would be happy to include this information in our final version. | Methods | \# Param | Latent| |-|-|-| | SLIViT | 48.4M | $768\times64\times8\times8$ | | SuPreM | 62.1M | $128\times12\times12\times12$ | | Merlin | 124.7M | $2048\times14\times7\times7$ | | MISFM | 46.2M | $100\times16\times16\times16$ | | VoCo | 294.9M | $3072\times3\times3\times3$ | | **Raptor (Ours)** | 304.4M (DINOv2-L) | $3\times100\times16\times16$ (K=100) | > **Plot upper bound in scaling figures:** We appreciate this suggestion, and plan to revise our figures accordingly. > **Can accuracy be improved by removing problematic views:** We appreciate this interesting suggestion. We explored whether skipping a view might improve FractureMNIST performance (either AC, CS, or AS, instead of ACS). However, results show no improvement: AC (0.657), CS (0.654), and AS (0.660), all perform below the default ACS (0.677). Intuitively, removing an entire view discards potentially useful information, even if that view is somewhat degenerate. An alternative solution would be to partition slices within a problematic view. For instance, instead of aggregating all 64 slices at once, we could split them into two groups of 32 slices each and perform Raptor, then concatenate. Although this could mitigate partial cancellation effects, it increases embedding size. 
We find this to be a promising extension to Raptor in both practical deployment (handle slice-wise misalignment) and theoretical analysis (quantify error bounds under partitioning). We are actively exploring how best to balance these tradeoffs. > **Why MAE performs bad:** The choice of architecture for the MAE was a transformer with 3-dimensional positional encodings. Despite our efforts to tune the model, we suspect that the datasets are simply too small (to learn e.g. effective patch embeddings and relationships between positional encodings). > **VQA Captioning:** Based on the strong performances that we observed (+ segmentation, in response to reviewer BpWD), we do hypothesize that Raptor embeddings would be capable of captioning. In this work, we focused on thoroughly verifying several properties of Raptor, and hope to evaluate its novel use cases in the future. We appreciate the insightful questions raised by the reviewer, and hope that the additional information we have provided further supports the potential of our method. If our response has sufficiently resolved current questions regarding our work, we would appreciate an increase in our score. --- Rebuttal Comment 1.1: Comment: **Previous Comments removed due to space constraint, please check for revision history** ---**Apr 8 Updates**---: I raised my score to 4, given the rebuttal answered to my initial questions, and I still think Raptor could be a useful innovation for global-level image understanding. Now I understand how the segmentation results are calculated, and although I believe the segmentation is not necessary, it does not have a negative effect on my final rating. I'd like to personally congratulate the authors on this work and this rebuttal. I can see the authors put a lot of effort there. Nice work! I appreciate the authors' continuing discussion on this segmentation topic, which, in my opinion, and I strongly suggest, should **not** be listed as contributions in the final copy. 
It should not even appear as exploratory work. Rationale:

1. No segmentation model was **ever** evaluated on single-slice results.
2. Comparing the middle slice does **not** indicate that the Raptor embedding is useful for segmentation at scale. The Raptor embedding probably captures the middle anatomy feature, so, fortunately, the middle-slice segmentation looks decent. But it may completely fail in other locations.
3. Segmentation using Raptor is a **false proposition and is infeasible**. Since each volume only has one Raptor embedding, to derive a segmentation mask for each slice (or each location, as a looser constraint), we need a dedicated segmentation head! That means if you train a segmentation head to segment the middle slice, it is unlikely to reconstruct a mask of the top slices from the same embedding, and you need to train a new one. Depending on how the anatomy changes in the volume, you may need $N$ segmentation heads with $m \le N \le k$, where $m$ is the number of different anatomical regions (e.g., chest, upper/mid/lower abdomen) and $k$ is the number of total slices.

---

Reply to Comment 1.1.1:

Comment: **---Apr 7 Updates---:** We thank the reviewer for continuing this dialogue and apologize for any lingering confusion about our segmentation pipeline. We recognize that Section 3.3 described a single, volume-wide embedding, which appears well-suited only to global tasks such as classification or regression. Here, we clarify how we adapt it for slice-level segmentation, explain why we compare only the middle slice, and offer additional context for Raptor's performance along with a new baseline. We agree that Raptor does indeed lose pixel-level information, yet at the same time it retains enough local signal to allow a reasonable segmentation.
> **Segmentation setup**

**Raptor's Volume Embedding:** The reviewer's statement of Section 3.3 is correct: given a volume $\mathbf{x} \in \mathbb{R}^{D\times D\times D}$, we generate a _volume_ embedding $\mathbf{v} \in \mathbb{R}^{3k \times p^2}$, which is used as the input for Raptor's segmentation task. Despite being "global," this embedding still retains localizable signals.

**MedSAM is 2D-only:** Because MedSAM operates on 2D slices rather than a full 3D volume, we consistently evaluate one slice per volume for both MedSAM and Raptor. We chose the _middle slice_ for simplicity, but one could, in principle, repeat this approach for every slice if a full volumetric segmentation were desired.

> **Raptor segmentation head**

We feed the Raptor embedding (of a volume) into the segmentation head to segment its middle slice. The target is a tensor of shape $n \times 224 \times 224$, where $n$ is the number of classes. More specifically:

1. Raptor embedding: $\mathbf{v} \in \mathbb{R}^{3k \times p^2} = \mathbb{R}^{300 \times 16 \times 16}$
2. Upsample $\times 4$ → convolution: $\mathbf{v} \in \mathbb{R}^{128 \times 64 \times 64}$
3. Upsample $\times 4$ → convolution: $\mathbf{v} \in \mathbb{R}^{n \times 256 \times 256}$
4. Final resize: $\mathbf{v} \in \mathbb{R}^{n \times 224 \times 224}$

> **Intuition behind Raptor's performance**

While Raptor aggregates the volume into a single embedding, the **three orthogonal orientations** can still "triangulate" local information under certain conditions such as smoothness and alignment (a more formal treatment is provided in Appendix A.4). The patches relevant to the middle slice are viewed in the other two axes, providing sufficient context for the 2D convolution head. We conjecture, however, that if the Raptor conditions are violated (as with FractureMNIST), its segmentation performance will deteriorate as well. To further bolster our intuition, we provide additional baselines.
We ran the experiment using Raptor with only 1 view of the volume (averaged across the slices). Similarly, we experiment with a 3D ResNet head with a pooling layer (resulting in a 1D bottleneck), which is expected to discard all volumetric information. In both cases, we expect a substantial amount of spatial information to be lost -- yet, we see that it is possible to deduce a segmentation. Of course, we do not recommend these approaches for dense segmentation. We simply wished to demonstrate for reviewer BpWD that, within reason, some segmentation capability is possible even when spatial dimensions are lost.

| Task | Dataset Size | Model | IoU | Dice Score |
|-|-|-|-|-|
| Hippocampus | 182 / 39 / 39 | MedSAM | 0.575 | 0.615 |
| | | Raptor | 0.607 | 0.719 |
| | | Raptor (1 view) | 0.528 | 0.593 |
| | | Resnet 3d | 0.523 | 0.582 |
| Spleen | 28 / 6 / 7 | MedSAM | 0.960 | 0.979 |
| | | Raptor | 0.592 | 0.657 |
| | | Raptor (1 view) | 0.536 | 0.573 |
| | | Resnet 3d | 0.495 | 0.497 |
| Colon | 88 / 18 / 20 | MedSAM | 0.841 | 0.906 |
| | | Raptor | 0.597 | 0.597 |
| | | Raptor (1 view) | 0.500 | 0.502 |
| | | Resnet 3d | 0.499 | 0.499 |
| Hepatic Vessel | 212 / 45 / 46 | MedSAM | 0.387 | 0.428 |
| | | Raptor | 0.387 | 0.431 |
| | | Raptor (1 view) | 0.334 | 0.338 |
| | | Resnet 3d | 0.331 | 0.332 |

In summary, **we do not claim that Raptor is optimized for fine-grained tasks**; rather, these preliminary experiments indicate that its aggregated embedding still encodes enough local structure to yield **reasonable segmentation** on certain datasets. We hope this clarifies our segmentation pipeline, and if any points remain unclear, we welcome further questions and will gladly elaborate.

**---End of Apr 7 Updates---**
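For readers of this thread, the segmentation head described in the update above can be sketched in a few lines of numpy (a hedged toy illustration, not the authors' code: 1×1 channel-mixing matmuls stand in for the convolutions, nearest-neighbour operations stand in for the upsampling and final resize, and `n_classes` is hypothetical; only the tensor shapes follow the rebuttal):

```python
import numpy as np

rng = np.random.default_rng(0)

def upsample2d(x, factor):
    # nearest-neighbour upsampling on the two spatial axes
    return x.repeat(factor, axis=-2).repeat(factor, axis=-1)

def conv1x1(x, w):
    # a 1x1 convolution is per-pixel channel mixing: (C_out, C_in) applied to (C_in, H, W)
    return np.einsum('oc,chw->ohw', w, x)

n_classes = 3                                    # hypothetical number of classes
v = rng.standard_normal((300, 16, 16))           # Raptor volume embedding (3k x p x p, k=100, p=16)
w1 = 0.01 * rng.standard_normal((128, 300))      # stand-in weights for the first conv
w2 = 0.01 * rng.standard_normal((n_classes, 128))

h = conv1x1(upsample2d(v, 4), w1)                # -> (128, 64, 64)
logits = conv1x1(upsample2d(h, 4), w2)           # -> (n, 256, 256)

idx = np.arange(224) * 256 // 224                # crude nearest-neighbour resize to 224
mask = logits[:, idx][:, :, idx]                 # -> (n, 224, 224)
```

Each step reproduces the shape transitions listed in the rebuttal; the real head would add nonlinearities and learned spatial convolutions.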
Summary: This paper presents a random projection-based strategy for generating embeddings from volumetric data. The approach leverages pre-trained 2D foundation models without requiring additional re-training or fine-tuning. The proposed embedding construction method is computationally efficient, and experiments conducted on ten datasets across multiple downstream tasks demonstrate strong performance. Claims And Evidence: 1. Semantically meaningful embeddings for volumetric data can be obtained from 2D foundation models without extra training. The analytical results and empirical experiment support this claim. Methods And Evaluation Criteria: The paper evaluates the proposed method against comprehensive baseline models, including 3D ResNet, ViT, and MAE, as well as pre-trained models such as SLIViT, SuPreM, Merlin, MISFM, and VoCo-L. The approach is validated on both classification and regression tasks, demonstrating its effectiveness across multiple benchmarks. Theoretical Claims: I reviewed the computational complexity and found no issues. However, I did not verify the theoretical analysis provided in the Appendix. Experimental Designs Or Analyses: I reviewed the experiment settings and results and found them to be fair and thorough. The results are sound and effectively support the analysis and conclusions. Supplementary Material: N/A Relation To Broader Scientific Literature: The paper addresses a key challenge in extracting volumetric embeddings for biomedical images. By eliminating the need for re-training, the approach offers efficiency and flexibility. Future improvements could be achieved by leveraging more advanced 2D foundation models from general computer vision fields. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: The paper is well-organized and clearly written, making it easy to follow and understand. The novelty is well-highlighted, and the experimental results effectively support the claims. 
Other Comments Or Suggestions: In Table 4, reporting the average performance across the 10 regions for each method would improve clarity and readability. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for their time and constructive comments, as well as their positive assessment of our method's clarity and novelty. As noted by the reviewer, Raptor introduces a paradigm that demonstrates many advantages beyond existing works, and we verify our claims with a wide range of empirical results.

> **Average score over 10 brain regions:** For better clarity, we have now added a column for average performance on the regression tasks, reproduced below. Overall, Raptor most accurately predicts physiological measures given brain MRIs, and Raptor-B comes in as the second best (all values are $r^2$; best score in bold, second best italicized).

| Methods | WhiteM | GreyM | Cereb | Amyg | Hippo | Cortex | Gyrus | Pall | Caud | Thal | Avg. |
|-|-|-|-|-|-|-|-|-|-|-|-|
| ResNet | 0.417 | 0.562 | 0.193 | 0.072 | 0.108 | 0.125 | 0.099 | 0.055 | 0.162 | 0.134 | 0.193 |
| MAE | 0.036 | 0.045 | 0.072 | 0.036 | 0.040 | 0.043 | 0.032 | 0.012 | 0.037 | 0.036 | 0.039 |
| MISFM | 0.418 | 0.624 | 0.276 | 0.089 | 0.145 | 0.236 | 0.209 | 0.087 | 0.166 | 0.164 | 0.242 |
| SuPreM | _0.646_ | 0.696 | 0.330 | 0.109 | 0.163 | 0.275 | 0.256 | 0.067 | 0.255 | 0.195 | 0.299 |
| SLIViT | 0.474 | 0.694 | 0.258 | 0.134 | 0.190 | 0.268 | 0.213 | 0.053 | 0.192 | 0.174 | 0.265 |
| VoCo | 0.225 | 0.375 | 0.189 | 0.071 | 0.113 | 0.059 | 0.048 | 0.043 | 0.060 | 0.075 | 0.126 |
| Merlin | 0.622 | 0.734 | 0.335 | 0.127 | 0.180 | 0.313 | 0.269 | 0.093 | 0.247 | 0.210 | 0.313 |
| Raptor-B | 0.614 | _0.742_ | _0.398_ | **0.185** | _0.247_ | _0.355_ | _0.314_ | _0.116_ | _0.331_ | _0.258_ | _0.356_ |
| Raptor | **0.681** | **0.777** | **0.437** | _0.170_ | **0.262** | **0.404** | **0.340** | **0.142** | **0.381** | **0.300** | **0.389** |

We again appreciate the reviewer for their words of support for our work.
In addition to the improved table, we hope that the additional analyses we shared with the other reviewers have further demonstrated the capabilities of our method, and if deemed so, humbly request that the reviewer consider raising the score as a further vote of confidence.

_(Below is in response to a point raised by reviewer zwUQ; we have moved this here due to the space limit, but we think that reviewer 28uY would also be interested to know.)_

> **More evidence for effectiveness with limited training samples:** We agree that a single dataset is insufficient to support our point. Hence we conducted the same experiments (varying training set size) for the CCCC-II dataset (classification) and the white matter category of the UKBB dataset (regression). As shown in the table below, while some methods eventually catch up as the sample size grows, Raptor maintains a clear advantage in low-data regimes for both tasks, underscoring our original claim of data efficiency (best score in bold, second best italicized).

**UKBB White Matter ($r^2$)**

| Sub | 10 | 50 | 100 | 200 | 500 | 1104 |
|-|-|-|-|-|-|-|
| SLIViT | _0.070_ | 0.155 | 0.206 | 0.241 | 0.437 | 0.474 |
| VoCo | 0.068 | 0.048 | 0.091 | 0.099 | 0.178 | 0.225 |
| Merlin | 0.028 | 0.123 | 0.314 | 0.177 | _0.629_ | 0.622 |
| MISFM | 0.056 | 0.106 | 0.104 | 0.208 | 0.330 | 0.418 |
| SuPreM | 0.059 | _0.305_ | _0.396_ | _0.557_ | 0.593 | _0.646_ |
| **Raptor** | **0.193** | **0.414** | **0.446** | **0.588** | **0.634** | **0.681** |

**CCCC-II (AUROC)**

| Sub | 10 | 50 | 100 | 200 | 500 | 2413 |
|-|-|-|-|-|-|-|
| SLIViT | 0.483 | _0.861_ | _0.914_ | 0.936 | 0.956 | 0.986 |
| VoCo | _0.638_ | 0.797 | 0.819 | 0.817 | 0.861 | 0.879 |
| Merlin | 0.509 | 0.499 | 0.483 | 0.492 | 0.484 | 0.927 |
| MISFM | 0.494 | 0.821 | 0.826 | 0.900 | 0.965 | 0.975 |
| SuPreM | 0.631 | 0.821 | 0.906 | _0.939_ | _0.965_ | _0.988_ |
| **Raptor** | **0.706** | **0.917** | **0.939** | **0.955** | **0.982** | **0.997** |
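As a side note for readers of this thread, the random-projection aggregation at the core of a Raptor-style volume embedding can be sketched as follows (a hedged toy illustration rather than the authors' implementation: the frozen 2D foundation-model encoder is replaced by a simple flattening stub, `volume_embedding` is a hypothetical helper, and all shapes are toy-sized):

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_slice(slice2d):
    # stand-in for a frozen 2D foundation model; in Raptor this would be
    # e.g. DINOv2 patch features, here we simply flatten the slice
    return slice2d.ravel()

def volume_embedding(vol, k=100):
    """Aggregate a DxDxD volume into one embedding by averaging randomly
    projected slice embeddings along the three orthogonal axes."""
    d = embed_slice(vol[0]).size
    R = rng.standard_normal((k, d)) / np.sqrt(k)    # shared random projection
    views = []
    for axis in range(3):                            # AC / CS / AS views
        slices = np.moveaxis(vol, axis, 0)
        z = np.mean([R @ embed_slice(s) for s in slices], axis=0)
        views.append(z)
    return np.concatenate(views)                     # shape (3k,)

vol = rng.standard_normal((16, 16, 16))
v = volume_embedding(vol, k=8)
print(v.shape)  # (24,)
```

The random projection keeps the per-slice features in a fixed low-dimensional space so that averaging over slices and concatenating the three views stays cheap, without any training.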
Demystifying Singular Defects in Large Language Models
Accept (poster)
Summary: This paper investigates the phenomenon of high-norm tokens in LLMs, identifying key factors that influence their behavior. These factors include singular directions, negative eigenvalues, and distinct computational pathways for initial and non-initial tokens. The study reveals that high-norm tokens are primarily driven by the leading singular vector of specific model components. These insights have practical implications for enhancing quantization schemes and designing LLM signatures. Claims And Evidence: N/A Methods And Evaluation Criteria: N/A Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your encouraging comments!
Summary: This paper investigates the phenomenon of high-norm tokens in large language models (LLMs), extending the understanding of singular defects from vision transformers (ViTs) to the context of LLMs. Unlike ViTs, where high-norm tokens have been modeled through singular vectors of linear approximations, the causes and characteristics of high-norm tokens in LLMs remain largely unexplored. The authors provide both theoretical insights and empirical validation across various recent models, leading to several key observations:

1. The layer-wise singular direction predicts the abrupt explosion of token norms in LLMs.
2. Negative eigenvalues of a layer explain the sudden decay of high-norm tokens.
3. The computational pathways leading to high-norm tokens differ between initial and non-initial tokens.
4. High-norm tokens are triggered by the right leading singular vector of the matrix approximating the corresponding modules.

The authors demonstrate two such applications: improving quantization schemes and designing LLM signatures. The improved quantization strategy selectively preserves precision for critical layers, enhancing robustness without compromising efficiency. Meanwhile, the stable high-norm directions serve as robust signatures to trace model lineage and detect model infringement.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes

Theoretical Claims: Yes. No issues are found yet.

Experimental Designs Or Analyses: Yes. No issues are found yet.

Supplementary Material: Yes.

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: Strengths:

- I think it is a valuable paper that addresses an important yet underexplored topic: the behavior of high-norm tokens in LLMs. The findings provide many new insights and will greatly inspire future work.
Weaknesses:

- This paper hypothesizes that causal self-attention might be the underlying reason for the emergence of the LLM's high-norm tokens. So I wonder why ViTs (with no causal attention) also have similar high-norm tokens. Is causal attention really the key factor behind this phenomenon?
- I am curious about the high-norm vision tokens in LLaVA-style large VLMs. The paper shows the different functions of the high-norm tokens in LLMs and ViTs. Meanwhile, in large VLMs, some previous works noticed that the vision tokens largely inherit the high-norm tokens from the ViT (the vision encoder). How do these high-norm vision tokens behave in the language models of VLMs? Are they still working as vision registers, or behaving as the LLM high-norm tokens as claimed in this paper?
- What are the differences between the attention sink [1] and the high-norm tokens discussed in this paper?
- What is the actual role of the high-norm tokens of LLMs discussed in this paper?
- More discussion is expected, e.g., which kinds of text tokens are easier to trigger the high-norm token phenomenon?

[1] Xiao, et al. Efficient Streaming Language Models with Attention Sinks. ICLR, 2024.

Other Comments Or Suggestions: N/A

Questions For Authors: See weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you for the insightful comments! We will incorporate the suggestions in the revision.

> Is causal attention really the key factor? Why do ViTs have similar high-norm tokens?

We hypothesize that causal attention excites high norms in LLMs in the following way. In the causal formulation, the possible output features at the initial position are finite ($=V$, the vocabulary size, e.g., 32K for LLaMA2). The softmax loss increases the norm of these features for each training sample, so their norms grow fast. By comparison, the possible output features at the 2nd position number about 1 billion ($=V^2$). The push towards increasing their norms is thus dispersed, yielding no systematic high norms in the 2nd and subsequent positions.

The high-norm tokens in ViTs could stem from different causes, as implied by their drastically different patterns:

1. They appear in the late stage of training, while high-norm tokens in LLMs appear in the early stage (L324-327).
2. They grow progressively layer by layer, while high-norm tokens in LLMs explode abruptly (L157-159).
3. They appear at random locations, while high-norm tokens in LLMs are mainly located at the starting position (L160).
4. They can be repaired by SINDER without affecting performance, while the presence of high-norm tokens is critical for the performance of LLMs (L310).
5. The single-token assumption in ViTs is an ideal assumption to simplify the analysis. By contrast, in LLMs, the single-token assumption is real and valid for the initial token (L157 Column2).

These differences motivate the question "whether the theory of singular defects can be applied to LLMs" (L34-35 Column2), and our study gives an affirmative answer.

> How about the behavior of high-norm vision tokens in the language models of VLMs? Are they working as vision registers or behaving as the LLM high-norm tokens?
To answer this question, we forwarded image-text mixed data through LLaVA-v1.5-7b, and located the image patch token with the highest norm in the ViT features. Here is its norm (rounded to integer) at different LLM layers:

```txt
764 (norm of input visual token), 764 (after 1st layer of LLM), 764, 765, 764,
764, 764, 762, 761, 760, 758, 756, 754, 752, 750, 748, 746, 744, 741, 740,
738, 738, 737, 736, 734, 734, 732, 732, 732, 730, 730, 729, 728
```

Unlike the high-norm text token in the LLM, whose norm explodes and decays, the norm of the high-norm vision token stays nearly the same throughout the LLM. In addition, the angle between the high-norm vision token and the high-norm text token is kept around 87 degrees across LLM layers. Based on these, we hypothesize that high-norm vision tokens neither work as registers nor behave as LLM high-norm tokens.

> What about the differences between the attention sink and the high-norm tokens?

Attention sink refers to the phenomenon that the initial token receives large attention scores. It is a side effect emerging from the fact that the initial tokens have high norms, as discussed in the massive activation paper by Sun et al. We believe our efforts on demystifying high-norm tokens provide a better understanding of the underlying mechanism of attention sinks. For example, in Fig. 12, we show that a model trained with window attention (Mistral) does not have initial high-norm tokens. As a result, Mistral can process arbitrary context lengths without the special treatment introduced in the attention sink paper.

> What is the actual role of the high-norm tokens in LLMs?

In Fig. 13, we show that high-norm tokens appear in the very early stages of training. We therefore hypothesize that they are a way to accelerate network convergence by injecting some biases. To verify this, we added an extra layernorm at the output of the residual branch of each attention block and that of each FFN block to suppress any possible high-norm tokens.
We trained the modified LLaMA2-1b model from scratch and observed that the loss decreased more slowly compared to the original structure.

> Which kinds of text tokens are easier to trigger the high-norm token phenomenon?

Firstly, as shown in Fig. 7, any token in the initial position will trigger the high-norm phenomenon. Secondly, some high-frequency tokens in the corpus such as `\n`, `<s>` may appear as non-initial high-norm tokens. Yet, the set of non-initial high-norm tokens may change during training. For example, in Fig. 13 Column2, the '.' token is a non-initial high-norm token at training iteration 50k, but is no longer a high-norm token at iteration 143k. We will extend Fig. 6 to more LLMs in the revision, and below we summarize their non-initial high-norm tokens.

|Model|Non-initial high-norm tokens
|:-|:-
|LLaMA2-7B-Chat|`<s>`, `.`, `。`, `\n`
|LLaMA2-7B-Code|`<s>`, `.`, `。`, `\n`
|LLaMA2-13B|`</s>`
|LLaMA2-13B-Chat|`</s>`
|LLaMA3-8B|`<\|begin_of_text\|>`
|LLaMA3-8B-Instruct|`<\|begin_of_text\|>`
|Phi3-Mini|None
|Phi3.5-Mini|None
|Phi3-Mini-128k|None
|Phi3-Medium|`<\|endoftext\|>`
|Qwen2-7B|None
|Qwen2-7B-Instruct|None
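The singular-direction argument running through this rebuttal can be illustrated with a toy linearized layer (a hedged sketch, not the paper's code: a residual block is approximated as $x \mapsto (I + W)x$ under the single-token assumption, and a token aligned with the leading right singular vector is stretched by the leading singular value toward the leading left singular vector, the "explosion" direction):

```python
import numpy as np

rng = np.random.default_rng(0)

# linearize one residual block as A = I + W (single-token assumption)
W = 0.3 * rng.standard_normal((64, 64))
A = np.eye(64) + W

# leading singular triple of the linearized layer
U, S, Vt = np.linalg.svd(A)
v_right = Vt[0]     # "trigger" direction: the input that excites the layer most
u_left = U[:, 0]    # "explosion" direction: where the amplified output points

# a unit token aligned with v_right is stretched by exactly S[0] ...
gain_aligned = np.linalg.norm(A @ v_right)

# ... while a random unit token is typically stretched far less
x = rng.standard_normal(64)
x /= np.linalg.norm(x)
gain_random = np.linalg.norm(A @ x)

# the amplified output is (anti)parallel to the explosion direction
alignment = abs(u_left @ (A @ v_right)) / gain_aligned

print(gain_aligned, gain_random, alignment)
```

Stacking such layers would compound the gain along the trigger direction, which is the toy analogue of the abrupt norm explosion tracked layer-by-layer in the paper.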
Summary: This paper is a direct follow-up to SINDER (Wang et al., 2024). In this paper, the authors use the tool of "singular defects" to analyze the occurrence of high-norm tokens in language models, which was observed in (Sun et al., 2024). They analyze the weights of the model to understand where high-norm tokens come from, and how they are decayed. They find that certain weight matrix properties can be used to predict which layers produce high-norm tokens. Finally, the authors demonstrate that high-norm tokens can be used to (1) improve quantization by not quantizing the layers responsible for handling the high-norm tokens and (2) act as a signature between language models, since they do not change much in fine-tuning.

Claims And Evidence: The claims are generally supported by evidence. However, in my view much of the findings are "re-discoveries" of findings that were initially observed in Sun et al. rather than new findings. In particular, Section 3.1 says that it identifies an "explanation for why there exist a set of fixed channels with high activations" - I don't think this is supported by the content of the paper. The fact that there is an explosion layer and a decay layer was already identified in Sun et al. (2024) (Page 4, Fig. 4). The singular defect analysis explains why tokens with high activations in those particular channels do not change, but it doesn't explain why those particular channels contain high activations, or how those occur in the first place. Similarly, the finding in Section 3.3 that self-attention doesn't affect high-norm tokens seems to be a clear corollary of the finding in Sun et al. that these tokens act as "biases" and are somewhat input-independent (Page 6). Finally, the notion of the "explosion subspace" (Section 3.4) and removing the relevant component is equivalent to the experiment in Sun et al. that zeros out the high-norm channels of these tokens and finds a drop in performance; as expected.
This finding exactly matches that of Sun et al. In summary, I think a lot of the claims in this paper are certainly backed by evidence; it's just that these aren't new findings but rather confirmations of phenomena already observed in prior work, couched in a different theoretical framework.

Methods And Evaluation Criteria: This paper is mostly based on analysis and measuring certain properties and phenomena that occur inside a fixed language model. The evaluation of these seems reasonable. This paper doesn't really require strong benchmarking in its experiments except for the applications section. I am not familiar with quantization benchmarks or LLM signature verification, but those experiments seem reasonable to me.

Theoretical Claims: While the "theory of singular defects" is a main tool applied in this paper, it was developed originally in SINDER and does not really require rigorous proof. Although it relies on somewhat strong assumptions (i.e., single-token, heavy linearization), I think it is a reasonable tool to analyze properties of the weights of a model, especially since it clearly identifies which layers are responsible for producing and decaying high-norm tokens.

Experimental Designs Or Analyses: I think the experiments and analysis are similar to those in SINDER, Darcet et al., and Sun et al.; these relate to taking a pre-trained model and probing different parts of it. This seems reasonably sound to me.

Supplementary Material: I did not review any additional supplemental material.

Relation To Broader Scientific Literature: This paper is a direct extension / follow-up to SINDER. The original papers that described the phenomenon of large-norm tokens were Vision Transformers Need Registers (2023) and Massive Activations in LLMs (2024). These papers explored the existence and probed the function of these high-norm tokens in language models. Notably, these papers did not try to find how these high-norm tokens came to be during the training process.
A more recent paper (Wang et al., 2024), SINDER, introduced the terminology of "defective" tokens and analyzed the weights of the relevant model. SINDER attempts to 'repair' the model by imposing a strong regularization on the singular values of the weight matrices. This paper applies the same technique, but to language models.

Essential References Not Discussed: I think this paper is missing a key reference: LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale by Dettmers et al. This paper identifies the exact phenomenon in the Applications section - that quantization can be improved by acknowledging the existence of large activations, and keeping those in higher precision. As such, the application related to quantization is not new at all.

Other Strengths And Weaknesses: Strengths: The paper does do a careful analysis of the properties of these high-norm tokens, and adds valuable analysis. I think the applications section is particularly strong, and it shows that acknowledging the existence of these special tokens is crucial for modifying pre-trained weights.

Weaknesses: My first main issue is that many of the empirical findings in this work are not new and were already discovered in Sun et al.; the paper should be re-written to reflect that these confirm prior findings and are not observations of new phenomena, as I note in the Claims+Evidence section.

My second main issue is that the concept of the paper itself seems rather tenuous. It amounts to looking at the weight matrices and finding that certain values in them lead to large-norm tokens, which is not actually the question of interest posed in Sun et al. (2024) or Darcet et al. (2023). That question is: why does this occur during training? Why do the weight matrices have these properties in the first place?
This more interesting question remains unanswered (as noted in Section 4) and thus limits the value of this analysis towards further understanding the occurrence of high-norm tokens; the analysis in this paper doesn't really add much beyond what was introduced in the SINDER paper. The contribution of this paper instead is showing that you can apply linear-algebra tools to analyze the behavior of activations under very strong assumptions (linearizing the attention and FFN blocks). This was already demonstrated in SINDER, and this paper applies it to the FFN layer. I don't think there is much more theoretical contribution in this work beyond what was originally proposed in SINDER.

Finally, the applications section does not acknowledge the prior work of Dettmers et al. (noted in the above essential references section), which already applies mixed-precision quantization to handle the existence of particularly high-norm features. While this is not exactly the same (high-norm channels/tokens vs. features), it is essentially the same idea. As such, I rate the paper as a weak reject.

Other Comments Or Suggestions:

1. The figures are far too small. These should either be larger or omitted from the paper. Figures 6-9 are also very small; the captions of these figures should not be the same size as the figures themselves.

Questions For Authors: The initial token analysis makes sense, since the linearization model only applies to one token. Doesn't this make the analysis of non-initial tokens very challenging? I'm very confused as to how we can expect to draw any meaningful conclusions from this.

Isn't the finding in Section 3.1 / Figure 4 that high-norm tokens share the same direction already stated in Sun et al.? They found that these HNs occur in the same locations in the vectors - this isn't a new finding. The notion of "empirical high norm direction" is already considered in that work too, just under a different name.
Isn't the finding that "self-attention plays an insignificant role in non-initial HN tokens" (Section 3.3) the same as the finding from Sun et al. that high-norm tokens act as fixed biases? They also found that they are independent of input values.

Isn't the finding that removing the "explosion subspace component" degrades performance essentially equivalent to the experiment of Sun et al. that zeroes out the high-norm tokens?

How does the quantization procedure differ from that of LLM.int8()?

This is more of a comment, but the main thing that would change my evaluation of this paper is a better understanding of how it fits into the broader literature and what exactly new it contributes. The analysis section as I currently understand it doesn't demonstrate any meaningful new understanding of why high-norm tokens occur in the first place, and many of the experimental findings mirror those identified in Sun et al. What exactly does this work contribute beyond SINDER and Sun et al. besides a straightforward application to language models?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you for the detailed and thoughtful comments; we are encouraged by the remark that the "applications section is particularly strong". We will incorporate the suggestions in the revision.

> How it fits into the broader literature and what exactly new it contributes

While Sun et al. spotted high-norm tokens and SINDER provided a basic tool, our work advances the understanding from three aspects:

* **Structural Characterization**. We introduce explosion subspace and decay eigenvalue analysis, providing a structured framework for high-norm token evolution in LLMs.
* **Causal Insights**. We conjectured a link between high-norm token emergence and causal self-attention.
* **Practical Applications**. We demonstrate their impact on quantization and introduce model signatures for lineage tracing.

We highlight our unique contributions and whether they relate to SINDER and Sun et al. below.

|Type|Source|Our Contribution|Sun et al.|SINDER|Comment
|:-|:-|:-|:-|:-|:-
|analysis|Sec. 3.1|The layer-wise singular direction predicts the explosion direction|x|related|SINDER studied ViT. We extend it to LLMs
||Sec. 3.2|Negative eigenvalues of a layer explain its sudden decay|x|x
||Sec. 3.3|The explosion pathway of the initial token links to self-attention, whereas that of the noninitial high-norm token is unrelated to self-attention|x|x
||Sec. 3.4|High-norm tokens are triggered by the explosion subspace|x|x
|observation|L326|High-norm tokens emerge early in training|x|x
||L303 Column2, L379 Column2|The high-norm direction stabilizes during training and is robust to fine-tuning|x|x
||L267-274|A systematic analysis that reveals all noninitial high-norm tokens|partial|x|Sun et al. failed to find tokens `。` and `<s>` for LLaMA2-7b
|application|Sec. 5.1|Improvement on quantization schemes|x|x
||Sec. 5.2|Design of LLM signatures|x|x

> The linear model applies to one token and makes the analysis of noninitial tokens challenging

Fig. 6 shows that the noninitial high-norm tokens retain their high norms after removing all self-attentions. They effectively behave as independent single tokens within the network (L262-264), which makes their analysis by the linear model feasible.

> Sun et al. found that HNs occur in the same locations in the vectors, which is the same as "high-norm tokens share the same direction"

Their observation is **not sufficient** to reach our conclusion that "high-norm tokens share the same direction". Firstly, they show no evidence that the relative magnitude of massive locations is consistent. For example, Tab. 1 and 2 of Sun et al. track the scalar magnitude by sorting, which discards the channel locations. Secondly, they did not quantify the influence of the non-massive locations (which are in the majority) on vector direction. As such, they cannot rigorously infer our conclusion.

> "Self-attention plays an insignificant role in noninitial HN tokens" is the same as Sun et al.'s "high-norm tokens act as fixed biases"

We respectfully disagree. Our statement refers to the development of noninitial high-norm tokens, namely, they still exhibit high norms after removing self-attention (Fig. 6). By contrast, self-attention is **indispensable** for initial high-norm tokens: when it is removed, initial tokens lose their high norms (Figs. 7 and 6). This underscores a fundamental difference from Sun et al.'s finding.

> The findings from removing the explosion subspace are equivalent to Sun et al.'s zeroing out massive activations

We will add the following clarification in the revision: "This observation echoes the experiments done by Sun et al., where the authors set the massive activations to zero."

> How does the quantization procedure differ from that of LLM.int8()?

The core innovation of LLM.int8() is to decompose a matrix into an 8-bit part and a 16-bit part. We did not use this trick. Besides, they rely on row-wise quantization whereas we use tensor-wise quantization (L354-357 Column2).
Finally, more than 50% of their matrices are affected (Fig. 3 in their paper), whereas ours only affects 2 matrices. We will add LLM.int8() to the related work.

> Why do high-norm tokens occur during training in the first place?

We have ruled out 11 factors (L308-313 Column2) and singled out the most probable reason: the causal self-attention. It is evidenced by the impact of different attention formulations (L318-329). With causal attention, the possible output features for the initial token are finite ($=V$, the vocabulary size, e.g., 32K for LLaMA2). The softmax loss pushes the norm of these features to grow for each training sample, resulting in high norms. By comparison, the count of possible outputs at the 2nd position is 1 billion ($=V^2$), which disperses the push toward increasing their norms.

> The "explanation for why there exists a set of fixed channels with high activations" is not supported

To avoid confusion, we will revise it to "This explains why the set of high-activation channels observed in Sun et al. is *fixed*".

---

Rebuttal Comment 1.1: Comment: The rebuttal mostly resolved my concerns, and I will raise my score to Weak Accept. I think that the paper should still be revised to very clearly delineate the differences between prior work and this paper, which adds an incremental contribution to existing works. Furthermore, the applications section should be seriously re-framed. Currently, it's written as if this paper is the first to observe that high-norm tokens matter for quantization, which is simply not true. The contribution of the applications section should be framed as an incremental improvement on top of existing work.

---

Reply to Comment 1.1.1: Comment: We are deeply encouraged that our rebuttal has resolved most of your concerns and we sincerely appreciate your decision to raise the score to 3. In the revision, we will very clearly differentiate our contribution from previous works.
Furthermore, we will expand the discussion of related works on the quantization application and properly clarify its improvement on top of the existing works. Thank you again for your valuable time and effort in reviewing our paper.
Open Your Eyes: Vision Enhances Message Passing Neural Networks in Link Prediction
Accept (poster)
Summary: The paper introduces a new GNN framework for link prediction tasks that can be used to extend existing architectures. The main idea is that the GNN can access image embeddings of visualizations of the (extended) neighborhoods. These are meant to enrich the representations with more context on where nodes are positioned in the graph and ultimately improve predictive performance. The paper claims that using this technique, state-of-the-art results can be achieved on common link prediction benchmarks. Claims And Evidence: One core claim the paper makes is that using image representations and embeddings from vision models (and the resulting “vision awareness”) improves performance. However, the paper never properly compares to other encoding techniques, like assigning the position of nodes in the visualization directly as features to nodes. The paper also fails to set the vision embeddings into context with other positional encodings commonly used for GNNs. While the performance seems to be good with the image embeddings, it’s not clear where these gains are coming from and whether one really needs to use vision models instead of some more simple and straight-forward encoding techniques. Methods And Evaluation Criteria: The paper proposes an MPNN architecture extension that could be used for any graph learning task. The paper does not convincingly explain why it chooses link prediction as the only task for evaluation. However, the datasets that are used in the evaluation are common and make sense. Theoretical Claims: The only theoretical claims are based around the runtime analysis and expressiveness of the model. Here, the paper does not mention the running time of O(VE), which, while dependent on the used visualizer, seems to be one of the major contributors and could be exemplified at least for the visualization method that is used in the final testing. 
Regarding the expressiveness (Remark 4.1): This result is not really surprising, as the method also completely breaks permutation equivariance. This fact is not really highlighted as a downside of the method, but would be important in this context. Experimental Designs Or Analyses: * The scalability analysis is generally a welcome addition, but the tests are only done on the smallest graph used in the benchmarking. Moreover, the scaling behavior is only analyzed by increasing the batch size. I’m really wondering: Is there any reason to believe that the methods do not scale linearly with increasing batch size? If the paper wants to analyze the scaling behavior to larger graphs, then one should increase the graph and not the batch size. This could (for example) be facilitated by sampling random graphs of increasing sizes and running the methods on them (for the scalability analysis it doesn’t really matter that much whether there is meaningful ground-truth data to learn). * A comparison to other (more standard) encoding mechanisms for the node positions would be welcome, as well as a comparison to other standard positional encodings used for example for graph transformers. Supplementary Material: I opened the code and had a short look, but didn’t run anything. There don’t seem to be setup instructions for the environment but the repo looks okay overall. Relation To Broader Scientific Literature: The paper is grounded in the broader work on link prediction and extends existing methods to obtain better results. Essential References Not Discussed: In my opinion, this is one of the main weaknesses of the paper. The paper doesn’t mention any recent positional embeddings (for example, the very widely used Laplacian-based features) or other embeddings that were popularized together with graph transformers. Multimodal GNNs, such as those using language embeddings, could also be mentioned. 
Furthermore, and more importantly, the paper does not discuss graph visualization techniques and how they work. As this is at the core of this paper, I think it would be hugely important to explain what the employed visualization techniques try to optimize. This is its own research area and commonly referred to as “graph drawing”. The paper should mention some algorithms, at least the current state-of-the-art (which also includes some GNN models), and especially those that were used in the paper. The paper could also consider different optimization criteria like stress, edge crossings, angles, overlap between nodes, … and explain which ones were used for the visualizations. In the ablation study for different visualization techniques, the paper refers to them as “graphviz, matplotlib, and igraph”, which really doesn’t sufficiently describe what method was used, as graphviz alone has at least 7 very different algorithms to draw graphs. matplotlib and igraph are not even graph drawing frameworks, so it’s questionable what was used here. The Appendix claims that fdp was used for graphviz; is there any reason for this specific choice? Were the other methods compared to this?

Other Strengths And Weaknesses:
* GVs don’t usually yield unique and permutation equivariant representations
* It is not clear whether the visualization as an image brings any advantage over encoding the node positions (in the visualization) as features

Other Comments Or Suggestions: I don't have more suggestions than the ones already mentioned.

Questions For Authors:
* Why does the paper only consider link prediction tasks?
* How do the vision embeddings perform in comparison to other positional encodings (like Laplacian-based ones, distance-based ones, and so on)?

## update after rebuttal

I appreciate the authors' responses and want to maintain my positive rating.
At the same time, I still feel that limiting the evaluation to link prediction significantly weakens the paper, which is why I don't want to go higher. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your insightful reviews. **Please note that all the new tables and figures mentioned here are provided at https://anonymous.4open.science/r/GVN-CLap/README.md**.

> compares assigning the visualized node positions as node features.
> compares more standard node positional encoding (PE) mechanisms like Laplacian-based and distance-based ones.

As you advised, we compared VSFs for nodes with 4 representative encoding mechanisms: 1) the 2-d axis positions of nodes in the subgraph image, 2) Laplacian PE, 3) distances to other nodes, and 4) node degree (centrality encoding). These PEs were used as node features and decoded by a 2-layer GCN + 2-layer MLP for link prediction. The results in Table A show that **VSFs outperform those PEs across datasets**. Notably, directly encoding the 2-d axis positions is not effective enough, underscoring **the importance of using a vision encoder for comprehensive structural information**.

> Why does the paper only consider link prediction tasks?

As the title indicates, this work focuses on link prediction within the context of MPNNs because:
1. Link prediction is one of the cornerstone tasks in graph learning, with 1) a long history, 2) significant applications, and 3) well-established experimental settings, making it vital and representative.
2. As a first exploratory work on incorporating vision into MPNNs, we faced many factors to explore. For example, we have revealed the effects of style consistency, node coloring, node labeling, image scopes, feature integration strategies, node shapes, visualizers, and encoders. Studying those factors took 8 months, so extending to other tasks is impractical for this starting work given limited resources.

Moreover, we provide results in Table B, which demonstrate that **the benefits of VSF are promising for the node classification task**. We will study such extensions in future works.
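For reference, the Laplacian PE baseline compared above can be computed in a few lines of numpy. The sketch below is our own illustration (not the exact code behind Table A), and these eigenvector features carry the usual sign/basis ambiguity.

```python
import numpy as np

def laplacian_pe(A, k):
    # Node features from the k smallest nontrivial eigenvectors of the
    # symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg, 1.0) ** -0.5 * (deg > 0)
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(L)  # eigenvalues returned in ascending order
    return eigvecs[:, 1:k + 1]      # drop the trivial constant eigenvector

# Toy graph: a 4-cycle 0-1-2-3-0; every node gets a k-dimensional feature.
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
pe = laplacian_pe(A, k=2)
```

Such features would then be used as node inputs to the GCN + MLP decoder, as described for the compared PEs above.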
> Is there any reason to believe that the methods do not scale linearly with increasing batch size?

While logarithmic axis labels are used in Fig. 5, the time actually scales linearly with the batch size. A version of Fig. 5 with linear axis labels is provided in Fig. A.

> O(VE) should be exemplified.
> Welcome to analyzing the scalability on larger graphs by increasing the graph size but not the batch size.

As you advised, we supplement another scalability analysis on randomly generated Erdos-Renyi graphs (edge probability = 0.2) with increasing numbers of nodes from 100 to 5000. The total times for training 200 epochs of GCN/GVN/E-GVN/SEAL are provided in Table C, and the contributions of O(VE) are explicitly listed in parentheses. As shown, the O(VE) time of GVN scales quadratically w.r.t. the number of nodes, making it impractical for large graphs. In contrast, E-GVN can handle large graphs in time comparable to GCN, and its O(VE) time scales nearly linearly.

> Remark 4.1 is not really surprising, as the method also completely breaks permutation equivariance.
> GVs don’t usually yield unique and permutation equivariant representations.

We acknowledge that VSFs are not permutation-equivariant. However, our performance has surpassed many permutation-equivariant SF-MPNNs, demonstrating that this is not fatal. We think there are two main reasons: 1) the learnable VSFs can provide some specific information, e.g., important structural patterns/motifs (Remark 4.2); 2) visualization also introduces meaningful data augmentation over permutations (e.g., varying layouts), which forces the decoder model to become less sensitive to node order, potentially alleviating the drawbacks. We will add these discussions in the revision.

> Should mention representative works about 1) general positional encoders, 2) GNNs with language embeddings, 3) graph drawing techniques and their optimization targets

Incorporating these discussions would definitely provide a more comprehensive background.
We will add the discussion of them in the revision.

> Matplotlib and igraph are not graph drawing frameworks, which doesn’t sufficiently describe what method was used.

Matplotlib is used to render the images, while networkx handles the structures. Igraph provides a built-in graph visualization function. We will add details about them in the revision.

> Is there any reason for selecting fdp for graphviz?

We chose fdp since a preliminary experiment comparing E-GVN with different layout algorithms on Cora (Table D) shows that dot and fdp are better. We then selected fdp over dot for two reasons:
1. The force-based layout algorithm in fdp highlights node clustering, which is important for link prediction.
2. The tree-based layout in dot often results in a flattened image whose length exceeds its width, wasting canvas space.

Thanks again for your kind reviews. We sincerely hope we have addressed your concerns. If you have any questions, please feel free to discuss with us.
Summary: This paper proposes a novel framework called Graph Vision Network (GVN) and its efficient variant (E-GVN) to enhance link prediction in graph neural networks by integrating visual perception. The authors argue that while message-passing graph neural networks (MPNNs) and structural features (SFs) are dominant in link prediction tasks, the potential of visual perception has been overlooked. Claims And Evidence: The claims made in the paper are well-supported by extensive empirical evidence: 1. **Enhancement through Visual Awareness**: The authors claim that incorporating visual perception enhances link prediction performance. This is supported by experimental results showing significant improvements over baseline MPNNs and SF-enhanced MPNNs across multiple datasets. 2. **Compatibility with Existing Methods**: The paper demonstrates that GVN and E-GVN can be seamlessly integrated with existing models like GCN and NCNC, achieving new state-of-the-art results. This supports the claim that visual features provide orthogonal enhancements. 3. **Scalability**: The efficiency of E-GVN is demonstrated through reduced computational complexity and memory usage compared to GVN, making it suitable for large-scale graphs. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem: 1. **GVN Framework**: The method involves converting subgraphs into visual representations using graph visualization tools and extracting visual structural features (VSFs) using a vision encoder. The integration of VSFs with MPNNs is explored through multiple strategies (attention-based, concatenated, and weighted). This approach is logical and well-motivated. 2. **Evaluation Criteria**: The authors use standard metrics (e.g., hit-ratio, MRR) for link prediction and evaluate on diverse datasets (Planetoid, OGB benchmarks). This ensures the robustness of their claims across different graph types and scales. Theoretical Claims: Not applicable. 
Experimental Designs Or Analyses: The experimental designs are sound and comprehensive: 1. **Datasets**: The use of both small-scale (Planetoid) and large-scale (OGB) datasets ensures the evaluation covers a wide range of graph sizes and complexities. 2. **Baselines**: The comparison with strong baselines (e.g., GCN, NCNC) and various integration strategies provides a thorough analysis of the proposed methods' effectiveness. 3. **Ablation Studies**: The paper includes ablation studies on visualization styles, scopes, and adaptivity of VSFs, providing insights into the design choices and their impacts. Supplementary Material: Code. Relation To Broader Scientific Literature: None Essential References Not Discussed: None Other Strengths And Weaknesses: Strength: Good performance and efficiency Weakness: Since the method is a plugin outside the MPNN, the novelty of GNN structure is limited but not fatal. I will still give a positive score. Other Comments Or Suggestions: I think the authors should focus on the efficient GVN in the paper architecture, since it can get better performances with NCNC and much more efficient. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful feedback. > I believe the authors should concentrate on the efficient GVN within the paper's architecture, as it can achieve better performance with NCNC and is significantly more efficient. We sincerely appreciate your suggestion. We will focus on highlighting the efficient GVN in the revision. Thanks again for recognizing our work.
Summary: This paper proposes using visual structural features (VSFs) as a replacement for heuristic-based structural features (SFs) in graph learning tasks. The key contribution is the introduction of vision-based enhancements, which are empirically shown to improve message-passing neural network (MPNN) performance for link prediction. Claims And Evidence: The paper investigates whether and how visual awareness of graph structures benefits MPNNs in link prediction. The authors provide empirical evidence demonstrating improvements over baseline models in Section 5. However, the justification for why VSFs work better remains unclear, particularly in relation to existing expressive power analysis on models with SFs. Methods And Evaluation Criteria: The paper utilizes standard link prediction benchmarks for performance comparison, making its evaluation broadly relevant. However, more details on how hyperparameters and configurations were chosen would strengthen the evaluation. Theoretical Claims: The theoretical justification for VSFs in Section 4.2 is grounded in prior work on subgraph-based methods (e.g., SEAL and labeling tricks). However, the expressive power of models incorporating VSFs remains unclear, especially given their black-box nature and potential randomness in mapping and encoding. This raises concerns about whether VSFs fundamentally improve representation power or simply introduce additional complexity without well-characterized benefits. Experimental Designs Or Analyses: The scalability analysis is incomplete. While inference time and GPU memory usage are reported, key details such as the configuration of VSF and the actual preprocessing overhead of VSFs are missing. Figure 5 lacks clarity regarding which encoder and number of hops for VSF were used. Since the primary computational cost comes from VSFs, explicitly reporting their preprocessing runtime would provide a clearer picture of the method’s efficiency. 
Supplementary Material: I reviewed all sections of the supplementary material but found no runtime comparisons between VSFs and classical SFs. Additionally, implementation details and hyperparameter settings for Section 5’s results are missing, which makes reproducing results and further evaluation difficult. Relation To Broader Scientific Literature: This paper presents an interesting adaptation of visual features to enhance MPNN-based link prediction. Essential References Not Discussed: Several relevant works are missing from the discussion in Sections 4.3 and 4.4: • BUDDY [1] and Bloom-MPNN [2] explored different strategies for integrating SFs with MPNNs, but they are not cited in relevant sections. • SUREL [3] introduced the idea of decomposing query-induced subgraphs into node-level subgraphs for efficiency, which directly relates to the node-centered visualization approach in E-GVN (Section 4.4). [1] Chamberlain, Benjamin Paul, et al. “Graph neural networks for link prediction with subgraph sketching.” ICLR’23. [2] Zhang, Tianyi, et al. “Learning Scalable Structural Representations for Link Prediction with Bloom Signatures.” WWW’24. [3] Yin, Haoteng, et al. “Algorithm and system co-design for efficient subgraph-based graph representation learning.” VLDB’22. Other Strengths And Weaknesses: • Strengths: The idea of leveraging vision models for structural feature extraction is straightforward and empirically effective based on the reported results. • Weaknesses: The paper lacks a clear justification linking the empirical improvements of VSFs to existing analyses of SF-based methods. Additionally, the increased computational complexity—due to large vision models such as ViTs—raises concerns about whether the performance gains justify the additional overhead. Other Comments Or Suggestions: N/A Questions For Authors: 1. How does VSF handle hub nodes, such as influencers in social networks or highly cited papers? 
Could the limited scope of visual perception cause nodes with non-isomorphic subgraphs to be mapped to non-distinguishable VSF? 2. How well does VDecoder perform as a standalone model on benchmark datasets? 3. Considering VSF requires style consistency, wouldn’t node-centered subgraph visualization degenerate back to standard MPNNs for nodes with isomorphic subgraphs? 4. Could the authors provide a direct runtime comparison of preprocessing VSFs vs. classical SFs? Additionally, what are the vision model configurations and hyperparameters used in Table 3, Figure 5, and other results in Section 5? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thanks for your insightful reviews. We address your concerns point by point, merging and re-arranging related comments together:

> The justification for why VSFs work better remains unclear, particularly in relation to existing expressive power analysis on models with SFs.
> The expressive power of models incorporating VSFs remains unclear, which raises concerns about whether VSFs fundamentally improve representation power or simply introduce additional complexity without well-characterized benefits.
> How well does VDecoder perform alone?

First, we want to clarify that we **did not claim VSFs always have stronger expressive power than other SFs**; instead, our main claims for VSFs are:
1. **When used alone, VSFs benefit MPNNs** (qualitatively demonstrated in Sec. 4.2, Remarks 4.1-4.4).
2. **When used together with other SFs, VSFs still provide remarkable orthogonal improvements** (empirically demonstrated in Sec. 5).

Second, the proposed VSFs show different characteristics from classic SFs: 1) classic link prediction SFs are typically derived from a single type of heuristic (e.g., SFs in SEAL only encode path distance, and SFs in BUDDY only focus on high-order common neighbors), whereas VSFs encode the whole subgraph from images, containing **massive and hybrid types** of structural biases (as demonstrated in Fig. 4); 2) with training on specific data, VSFs **can vary** to align with the data requirements (Remark 4.4). Due to such characteristics, it is not easy to theoretically analyze the properties of VSFs precisely, so we instead conducted empirical demonstrations (Sec. 4.2 and 5) to describe the properties of VSFs.
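To make the "single type of heuristic" point concrete, the classic common-neighbor-family scores each collapse a candidate link into one scalar. A minimal Python sketch (our own illustration, with a toy adjacency structure):

```python
import math

def heuristic_scores(adj, u, v):
    # adj: dict mapping each node to the set of its neighbors.
    # Assumes every common neighbor has degree > 1 so log() is nonzero.
    common = adj[u] & adj[v]
    cn = len(common)                                       # Common Neighbors
    aa = sum(1.0 / math.log(len(adj[w])) for w in common)  # Adamic-Adar
    ra = sum(1.0 / len(adj[w]) for w in common)            # Resource Allocation
    return cn, aa, ra

# Toy graph: triangle 0-1-2 with a pendant node 3 attached to node 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cn, aa, ra = heuristic_scores(adj, 0, 1)  # the only common neighbor is node 2
```

Each such score encodes exactly one structural bias, whereas a rendered subgraph image exposes many of these patterns to the vision encoder simultaneously.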
Lastly, **we provide the standalone performance of 2-hop VSFs from a ResNet50 encoder in the following table, which is a direct comparison of the representation quality of different SFs on the link prediction task.** As illustrated, both the link-based and node-based VSFs proposed in this paper outperform common-neighbor SFs (i.e., CN, RA, and AA) and distance-based SFs (i.e., SPD) across various datasets, showing structural representations superior to classic link prediction SFs.

|SFs|Cora|Citeseer|Pubmed|
|---|---|---|---|
|CN|33.92±0.46|29.79±0.90|23.13±0.15|
|AA|39.85±1.34|35.19±1.34|27.38±0.11|
|RA|41.07±0.48|33.56±0.17|27.03±0.35|
|SPD|29.97±0.76|41.37±0.81|40.21±0.59|
|VSF(link)|**68.24±0.41**|**57.66±0.47**|**46.81±1.02**|
|VSF(node)|65.98±0.72|55.51±0.91|45.24±0.44|

> Implementation details and hyperparameter settings for Sec. 5’s results are missing, which makes reproducing results difficult.
> What are the vision model configurations and hyperparameters used in Table 3, Fig. 5 and other results?
> Explicitly provide a direct runtime comparison of preprocessing VSFs vs. classical SFs.

Thanks for your suggestion.
1. For hyperparameter configurations, please note that we have included **all the hyperparameters for the main results in the ./scripts/ path of our submitted code repo. We also give the ranges and optimization approaches of the significant hyperparameters in Sec. 5.1. Therefore, we think that it is easy to reproduce our results**. For the scalability analysis in Fig. 5 and other experiments, the encoder is ResNet50 by default as mentioned in Sec. 4.2, and the default hop is 2.
2. Pre-processing time compared with SFs: we provide the time comparison of pre-processing VSFs and classic SFs on Cora and ogbl-ddi in the following table. As shown, **E-GVN can process VSFs in time (in seconds) comparable to classic SFs.
For dense graphs like ogbl-ddi, it is even more efficient** because the number of nodes is much smaller than the number of links.

|Dataset|VSF(GVN)|VSF(E-GVN)|CN|RA|AA|SPD|
|---|---|---|---|---|---|---|
|Cora|2.91e1|5.02e-2|**1.69e-2**|1.81e-2|1.89e-2|3.06e-2|
|DDI|4.39e4|**6.20e0**|1.46e2|1.95e2|2.07e2|4.46e1|

> How does VSF handle hub nodes?

As exemplified in Fig. 2, hub nodes are faithfully reflected in the image, revealing important cluster information.

> Could the limited scope of visual perception cause nodes with non-isomorphic subgraphs to be mapped to non-distinguishable VSFs?

No. For any hop k, non-isomorphic subgraphs are visualized as different images, resulting in distinguishable VSFs.

> Would node-centered visualization degenerate to standard MPNNs for nodes with isomorphic subgraphs?

No. Though we eliminated node labels for structural clarity in the visualization, we constructed the adjacency matrix in ascending order of node labels. This makes the visualizations of isomorphic subgraphs differ in layout if their relative label orderings are not permutation-equivariant.

> Several relevant works are missing from the discussion in Sec. 4.3 and 4.4.

Thanks for pointing these out. Incorporating them can definitely make the discussion more comprehensive. We will add discussions of them in the revision.

We sincerely hope we have addressed your concerns. For any further clarification, please feel free to let us know.
Summary: This paper proposed to incorporate vision information into MPNNs to enhance link prediction. Specifically, it designed two frameworks, the Graph Vision Network (GVN) and a more efficient variant (E-GVN). Claims And Evidence: This paper analyzes the potential benefits of incorporating vision awareness in link prediction and provides some empirical evidence. This paper also demonstrated the effectiveness of the proposed GVN and E-GVN across several link prediction datasets, and empirically shows that vision awareness can bring orthogonal improvements to SOTA methods. Methods And Evaluation Criteria: Yes, the empirical study can well support the effectiveness of the proposed method. Theoretical Claims: NA Experimental Designs Or Analyses: The experimental designs are overall thorough and convincing. Supplementary Material: Yes Relation To Broader Scientific Literature: This paper proposed that incorporating the vision modality into MPNNs can enhance link prediction, and supports it via empirical observations and experimental studies. Essential References Not Discussed: NA Other Strengths And Weaknesses: Pros: 1. This paper proposed a novel idea to incorporate the vision modality into MPNNs for link prediction, and demonstrates its rationality and effectiveness via empirical study. 2. Most of the intuitions are well supported and discussed via empirical study and preliminary analysis. 3. The experiments are carefully designed from different perspectives to support its design and motivations. Cons: 1. The discussion for RQ1 could be more convincing if some empirical study or theoretical analysis could be provided, instead of just intuition analysis. 2. More link prediction methods that consider the graph sub-structures could be included in the comparison. Other Comments Or Suggestions: 1. The discussion for RQ1 could be more convincing if some empirical study or theoretical analysis could be provided, instead of just intuition analysis. 2.
More link prediction methods that consider the graph sub-structures could be included in the comparison, such as Link-MOE ("Mixture of Link Predictors on Graphs"). Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your insightful reviews.

> 1. The discussion for RQ1 could be more convincing if some empirical study or theoretical analysis could be provided, instead of just intuition analysis.

Thanks for your advice. For RQ1, you can find the following support for our discussion. 1) For subgraph visualization, we have provided an empirical ablation on **keeping style consistency** in Table 4 of Sec. 5.3, with more details in Appendix H. Appendices I.1 and I.2 provide empirical studies on the influence of node color and shape, particularly demonstrating the significance of **highlighting the center nodes**. Appendix I.3 provides an ablation study on **node labeling strategies**, noting that eliminating node labels performs the best. 2) For the decoupled vision scope, we exemplified actual cases in Figure 2 to support our claim about the vision scope and provide a sensitivity study in Table 5 of Sec. 5.3. The theoretical analysis of decoupling the subgraph scope from the MPNN has been proven by [r1]; we omitted the details due to the page limit and will add them in the revision.

[r1] Decoupling the depth and scope of graph neural networks. NeurIPS 2021

> 2. More link prediction methods that consider the graph sub-structures could be included in the comparison, such as Link-MOE ("Mixture of Link Predictors on Graphs").

As suggested, we have included Link-MOE in the comparison. Please note that Link-MOE integrates many MPNNs as experts to achieve good performance. Differently, as a plugin method outside the MPNN, our proposed framework can supplement an **individual** MPNN to provide additional enhancements. Some results (i.e., Hits@50 for ogbl-collab, Hits@100 for ogbl-ppa) are shown in the following table, where we substitute the GCN and NCNC experts in Link-MOE with E-GVN(GCN) and E-GVN(NCNC). According to the results, we can see that incorporating VSFs can further enhance the performance of Link-MOE.
||ogbl-collab|ogbl-ppa|
|---|---|---|
|Link-MOE|71.32 ± 0.99|69.39 ± 0.61|
|Link-MOE+VSFs|71.84 ± 0.85|70.06 ± 0.56|
Toward Foundation Model for Multivariate Wearable Sensing of Physiological Signals
Reject
Summary: This work presents a foundation model capable of handling all physiological signal modalities. To deal with varying signal modalities, several modules were introduced. A first attempt at zero-shot evaluation of physiological signals was reported. Claims And Evidence: As a foundation model, the pre-trained weights should demonstrate better adaptation to downstream tasks than a randomly initialized model of the same architecture. An ablation showing how much the pre-trained weights help with downstream adaptation should be demonstrated; otherwise the pre-training would not be justified. Methods And Evaluation Criteria: It is unclear what tasks were performed for each of the evaluation datasets. Perhaps I missed them, but more details on the evaluation and pre-training datasets should be provided, such as the number of classes and which modalities were used. Theoretical Claims: No theorem was claimed or proved. Experimental Designs Or Analyses: Some baselines, such as CHRONOS, are univariate time series models. It would be important to specify which channel/lead and modality was used to obtain the results in Table 3. The comparison would be rather unfair if the proposed model uses multimodal multivariate data while the baselines use unimodal univariate data. Supplementary Material: The supplementary materials include source code. Relation To Broader Scientific Literature: None. Essential References Not Discussed: Comparison to other domain-specific baselines, such as EEG, ECG, and PPG foundation models (LaBraM, ECG-FM, PaPaGei…), would be important to give readers an idea of how well the proposed work performs in relation to them, even if they serve as upper bounds. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: In Table 2,
Brain-Cognitive dataset, its reference is (Dar et al., 2022), but the link appears to point to an ECG and PPG dataset Questions For Authors: In Section 3.4, it states “we use off-the-shelf frozen encoders for both signal and text modalities.” The text encoder is GPT3.5, what signal was used? From Figure 11, it seems that the NormWear can handle a maximum of 10 channels (PPG, ECG, GSR, 3x ACC, 4x EEG). What if my data has more than 4 EEG channels or 1 ECG channel? Where is the reference electrode for EEG? All input data was pre-processed to 65Hz for 6 seconds. What if my task requires longer input windows? For example, sleep staging typically use 30 second windows. The input to NormWear is physiological signals, without telling the model which channel or modality, correct? The idea for zero-shot is to train the linear mapping to keys, value and likelihood parameter so that information can be aggregated automatically. ## update after rebuttal I appreciate the authors' responses. It has addressed several of my concerns/confusion. I have raised my score to reflect this. However, there are some concerns that remain unaddressed. Hyperparameters can significantly impact model performance. Using the same hyper parameter for all models and datasets is not rigorous, especially for the domain specific baselines. For WESAD, the reported results in PaPaGei was for binary classification and the results reported in this paper was for three classes, so they are not comparable. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable feedback and for acknowledging our idea. We have addressed and clarified the comments below.

## 1) Ablation: fine-tuning with pre-trained weights vs. random initialization

We appreciate the suggestion to include an additional ablation study. Our work focuses on linear probing to evaluate representation quality without fine-tuning, which is a standard and statistically sound approach in self-supervised learning, as noted by reviewers 8Rhq and eZc9. Although fine-tuning is not our main focus, we agree it helps demonstrate the benefit of pretraining. To address this, we will include an example in the appendix: on the PPG_HTN dataset, fine-tuning NormWear achieves an AUROC of 0.65, compared to 0.56 from random initialization. This highlights the practical utility of our model beyond the primary evaluation setting.

## 2) More details on downstream datasets

We have included initial descriptions of the pretraining and evaluation datasets, including their domains, modalities, sample sizes, and subject counts, as shown in Tables 1 and 2. We appreciate the suggestion and agree that adding more detail would improve clarity. In response, we will revise the Appendix to provide more comprehensive descriptions of each dataset. Due to space limitations here, we include a concise summary table of the downstream tasks below.
|Downstream Tasks|Task description|num_classes|num_channel|
|---|---|---|---|
|WESAD|Stress Detection|3|10|
|UCI-HAR|Activity Recognition|6|6|
|Driver Fatigue|Fatigue Classification|2|4|
|Eye-open-state|Eye State Detection (Open/Close)|2|1|
|Eye relaxation|Eye Fatigue Detection|2|1|
|Epilepsy-health area|Brain Region Health Classification|2|1|
|Epilepsy-tumor area|Brain Tumor Detection|2|1|
|Epilepsy-seizure|Seizure Detection|2|1|
|Gameemo|Emotion Recognition|4|4|
|ECG-Abnormal|Abnormal Heartbeat Detection|2|1|
|PPG-HTN|Hypertension Stage Classification|4|1|
|PPG-DM|Diabetes Detection|2|1|
|PPG-CVA|Brain Stroke Detection|2|1|
|PPG-CVD|Cardiovascular Disease Classification|3|1|
|PhysioNetEMG|Muscular Disease Classification|3|1|
|Noninvasive-BP|Blood Pressure Estimation|Regression|3|
|PPG-HGB|Hemoglobin Estimation|Regression|2|
|Fetal-fPCG|Fetal Heart Rate Estimation|Regression|1|

## 3) Clarifying Baseline Application on Multivariate Data

For univariate baseline models such as TF-C and Chronos, we applied each model separately to each modality to extract representations specific to that modality. We then concatenated the resulting representations and used a linear classifier for downstream evaluation. This setup ensures a fair comparison with our method [see "Uni-modal baselines" in the response to reviewer 8Rhq]. We will revise Section 4.1 to make this clearer in the manuscript.

## 4) Comparison to domain-specific baselines

We appreciate the suggested baselines and references. We have added ECG-FM, PaPaGei, and CBraMod as baselines for ECG, PPG, and EEG, respectively. As shown in the updated results [see “Baseline Comparison with single modality foundation models” in response to Reviewer eZc9], these models outperform previous benchmarks, but NormWear still achieves the best performance, demonstrating strong generalizability.
We will include and discuss these models in the Related Work section and are happy to add any further related works the reviewer suggests.

## 5) Dataset reference

Thank you for pointing out this oversight. We have corrected the dataset citation in the manuscript.

## 6) Clarifying the off-the-shelf frozen encoders in zero-shot inference

Our text encoder is the Tiny Clinical LLaMA model, and our signal encoder is our proposed NormWear model. We utilized GPT-3.5 solely for data augmentation of text prompts and did not use it as an encoder. We will revise Section 3.4 of our manuscript to clearly distinguish these components.

## 7) Input and Channel Flexibility

NormWear is flexible with respect to both input channels and sequence length. It can handle any number of input channels, and the use of 10 channels in our experiments reflects the dataset rather than a limitation of the model. The 6-second input window was selected for implementation efficiency, but longer sequences can be supported through positional embedding interpolation. In addition, we want to highlight that NormWear does not rely on explicit channel labels to distinguish modalities. This is justified by our channel-shuffling experiment [see "Technical clarifications" in response to reviewer 8Rhq]. We will make sure to include an example using a more complex dataset in the appendix to serve as a practical guide for researchers applying our model to real-world data. Thank you once again for your invaluable feedback. We have incorporated the suggested revisions and all the related citations accordingly.

---

Rebuttal Comment 1.1: Comment: Thanks for the clarifications. With regard to domain-specific baselines, could the authors clarify what metric is used and how the baseline results were obtained? Specifically, how was the data preprocessed, were the models linear-probed or fine-tuned, and how were the hyperparameters chosen? The signal-specific result of PaPaGei for WESAD appears rather odd.
As for input channel flexibility, it was claimed that a "channel-aware attention layer that enables the model to process arbitrary multivariate inputs" is used. I think the methods section could further elaborate on how this is achieved. In particular, if a channel was not seen during pre-training, how can the model extract a representation for that channel for downstream tasks? Furthermore, if my downstream dataset contains more than 10 channels, can I use the existing model, and how? I think these are important practical considerations that would be of great interest to readers who might want to use the proposed model.

---

Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer's continued engagement with our work.

## 1) Technical clarification

### a) Baseline preprocessing

To ensure a fair comparison between models, the signals are passed through the same basic processing pipeline described in Appendix A, except for the resampling step: the signals are resampled to the sampling rates stated in the baselines’ papers (125 Hz for PaPaGei, 500 Hz for ECG-FM, 200 Hz for CBraMod). Following the protocol established in earlier work [1], we use linear probing to evaluate all the models including NormWear, and AUC-ROC is reported. Supplementary metrics were originally reported in Appendix B Table 12, which will be updated with modality-specific results.

### b) Evaluation and hyper-parameter setting

To ensure a fair comparison, we use the same training and test splits (splitting the subjects into train and test) across all models. To avoid introducing bias from varying hyperparameter choices, we used the default settings and kept them fixed across all models (including NormWear) and downstream datasets: (1) LogisticRegression: {'penalty': 'l2', 'C': 1.0, 'solver': 'newton-cg', 'max_iter': 500}. (2) Ridge Regression: {'alpha': 1.0, 'solver': 'cholesky', 'max_iter': 500}.

### c) PaPaGei performance on WESAD

As mentioned above, we ran PaPaGei-S with the same settings.
The large performance gap (0.761 vs. 0.567) between NormWear and PaPaGei-S comes from NormWear’s ability to take not only PPG but also other informative signals (accelerometers, GSR, and ECG). In addition, the WESAD performance we report for PaPaGei-S is consistent with the range (0.53–0.58) reported in their original paper. The minor performance discrepancy may be due to differences in task formulation: we follow the WESAD 3-class classification setting, while PaPaGei's paper reformulates the problem into binary classification based on valence and arousal scores. Given these differences, our results are well justified and consistent with prior work.

## 2) Channel flexibility methodology

We appreciate the reviewer's suggestion. Here we provide a brief overview of the elaboration; a detailed description will be added to the main manuscript: "Consider an arbitrary $c$-channel input with shape $[N\times c\times seq\_length]$, which initially becomes shape $[N\times c\times P\times 768]$ after CWT processing and patching, where $N$ is the batch size and $P$ is the number of patches. Our encoding block is then composed of the following steps: 1) Duplicate the shared [CLS] token $c$ times and copy it to each sample, resulting in a tensor of shape $[N\times c\times 768]$, where 768 is the global latent size. These tokens are then prepended to each channel, making the input data, denoted $x$, of shape $x \in \mathbb{R}^{N\times c\times (P+1)\times 768}$. 2) Our first intra-channel encoder, denoted as the function $f_{intra}$, processes each channel independently by first reshaping the input to shape $[(N \cdot c)\times (P+1)\times 768]$, capturing channel-specific waveform pattern features and resulting in $f_{intra}(x) \in \mathbb{R}^{(N \cdot c)\times (P+1)\times 768}$.
Up to this point, the shared [CLS] token, duplicated into each sensor channel, fuses channel-specific information through the first intra-channel attention mechanism, thereby becoming a set of modality-specific special-token representations. 3) Next comes the inter-channel fusion mechanism, which takes only these modality-specific [CLS] tokens and conducts information exchange through an inter-channel attention mechanism, resulting in a tensor of shape $[N\times c\times 768]$ that is put back into $f_{intra}(x)$ to replace the [CLS] representations from before the inter-channel information exchange. Notably, in the technical implementation, self-attention is inherently both length-agnostic and order-agnostic, meaning it can seamlessly encode an arbitrary number of tokens. Throughout this entire encoding process, the number of channels $c$ never interferes with the encoding logic; hence, the encoding methodology works for arbitrary $c$. Regarding unseen channels, NormWear is designed to learn generalizable features during pre-training, enabling effective extraction and representation of previously unseen sensor inputs. As described above, the architecture of NormWear naturally adapts to new input channels. As evidence, among our downstream datasets, UCI-HAR contains gyroscope data and PhysioNet-EMG contains EMG data, both unseen during pre-training. Despite this, NormWear achieves promising performance on both datasets under linear-probe evaluation." We hope these clarifications adequately address the concerns, and we would be honored if the reviewer finds our improvements worthy of reconsideration. Thank you!

## Reference

[1] Chen, X., Xie, S., & He, K. (2021). *An Empirical Study of Training Self-Supervised Vision Transformers*.
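[Editor's note] The channel-flexible encoding flow described in the reply above can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: the trained transformer blocks are replaced by a single random-weight self-attention step, and the latent size is shrunk from 768, but the [CLS] duplication, intra-channel encoding, and inter-channel fusion shapes follow the description.

```python
# Hypothetical sketch of the channel-aware encoding flow (shapes only;
# the real model uses trained transformer blocks and D = 768).
import numpy as np

D = 16  # latent size, shrunk for this sketch
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(3))

def self_attention(tokens):
    """Single-head scaled dot-product self-attention over the token axis."""
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(D)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def encode(x, cls_token):
    """x: (N, c, P, D) patch embeddings; works for any channel count c."""
    N, c, P, _ = x.shape
    # 1) prepend a shared [CLS] token to every channel -> (N, c, P+1, D)
    cls = np.broadcast_to(cls_token, (N, c, 1, D))
    x = np.concatenate([cls, x], axis=2)
    # 2) intra-channel encoding: fold channels into the batch axis,
    #    so each channel is processed independently
    h = self_attention(x.reshape(N * c, P + 1, D)).reshape(N, c, P + 1, D)
    # 3) inter-channel fusion over the per-channel [CLS] tokens only,
    #    then write the fused tokens back into the sequence
    fused_cls = self_attention(h[:, :, 0, :])  # (N, c, D)
    h[:, :, 0, :] = fused_cls
    return h

cls_token = rng.standard_normal(D)
for c in (3, 10):  # the channel count never touches the encoding logic
    out = encode(rng.standard_normal((2, c, 5, D)), cls_token)
    print(out.shape)
```

Because self-attention is length-agnostic, neither function above hard-codes the number of channels, which is the point the rebuttal makes about arbitrary $c$.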
Summary: The paper introduces NormWear, a foundation model for multivariate wearable signals. The model is trained using a masked-reconstruction (self-supervised) loss on signal sources including ECG, PPG, and IMU, taken from 11 wearable datasets. NormWear is evaluated on downstream tasks, such as state recognition from EEG and abnormal ECG detection, using linear probing. The authors also introduce a ‘zero-shot’ mechanism for classification using a learnt module that aligns signal representations with text representations. Claims And Evidence: “previous works [..] do not capture the complex relationships between signals from sensors located on different body parts. These two limitations of recent approaches hinder their generalization and usefulness for wearable health monitoring.” Most of the baseline tasks are uni-modal. There is little evidence that the multi-modal approach of NormWear leads to better downstream uni-modal performance, e.g. that pre-training with ECG data improves performance on EEG-related tasks and vice versa. “NormWear [is] the first to achieve zero-shot inference on wearable sensing tasks” Comparing Tables 3 and 4, it seems that simple statistical baselines consistently outperform zero-shot NormWear. Table 11 of the Appendix seems to indicate that an even simpler baseline of demographic features also often outperforms zero-shot NormWear. Unfortunately, it doesn't seem like the proposed method works. Methods And Evaluation Criteria: The selected baselines are weak. Comparing against fully-supervised baselines using varying quantities of labelled data (e.g. 1%, 10%, 100%) is a standard method of assessing the learnt representations of foundation models and could be included. Meanwhile, for specific downstream modalities, PaPaGei [1] and ECG-FM [2] could be appropriate baselines for PPG and ECG. [1] A. Pillai, D. Spathis, F. Kawsar, and M. Malekzadeh, ‘PaPaGei: Open Foundation Models for Optical Physiological Signals’. [2] K. McKeen, L.
Oliva, S. Masood, A. Toma, B. Rubin, and B. Wang, ‘ECG-FM: An Open Electrocardiogram Foundation Model’. Theoretical Claims: N/A Experimental Designs Or Analyses: Both the linear probing (Table 3) and zero-shot experiments (Table 4) appear statistically sound. For supervised fine-tuning experiments, how were the downstream task datasets partitioned into train, val, and test sets? Table 1 indicates that there are many more samples than subjects. Were the sets split randomly by subjects or by segments? Supplementary Material: I have read the supplementary material. Much of the content needs tidying before publication. For example, the results of Table 11 are presented with little introductory commentary. What is the motivation for this experiment? What are the demographic features? What sizes of dataset do Medium and Large correspond to? “In Table 7, we include the results from conducting the statistical test across different task groups (the groups were highlighted with different colors in the tables in main sections) and the total average scores.” I think this is meant to refer to Figure 7. In Table 13, what are the RAM columns measuring? Peak RAM usage? Why is performance so slow using a Jetson Nano GPU? I have used these devices in the past and know they can run similar models with high inference throughput. CPU and EDGE CPU configurations are reported as using 9.12 MB RAM (are these meant to be identical?), but when running on CPU the peak RAM usage must be higher than this, because the VRAM column shows several hundred MB. These stats do not look plausible. Relation To Broader Scientific Literature: Prior work has investigated foundation models for wearable signals, including multi-modal configurations, many of which are cited. SleepFM [1] is another important recent example of a multi-modal foundation model that should also be cited.
Meanwhile, supervised learning works have proposed architectures that can handle heterogeneity in multi-modal wearable time series analysis from physiological signals, including via a similar CLS-style fusion mechanism [2]. This paper goes a step further and introduces a foundation model that can handle heterogeneity in wearable signals. It also proposes a novel approach to enable the model to be used for zero-shot inference. However, as discussed above, the performance of this approach is poor. [1] R. Thapa et al., ‘SleepFM: Multi-modal Representation Learning for Sleep Across Brain Activity, ECG and Respiratory Signals’. [2] J. F. Carter and L. Tarassenko, ‘wav2sleep: A Unified Multi-Modal Approach to Sleep Stage Classification from Physiological Signals’. Essential References Not Discussed: No essential references were missed, but additional contextual suggestions are provided above. Other Strengths And Weaknesses: The presentation of statistical results could be improved. For example, Table 3 displays methods in the columns, and Table 4 displays methods in the rows. Task results are in different orders between tables. Numerical precision is inconsistent. Table captions are too small. Other Comments Or Suggestions: L195: “However, this method requires quadratic computation time, as every token passes through the self-attention module, making it impractical for real-world applications.” Unsupported claim. Ongoing innovations like FlashAttention are addressing computational issues of long context. Dependent on the hardware used and the size of the model. Questions For Authors: 1. Are existing baselines like Chronos re-trained on the same dataset? Or is reported performance using them ‘off-the-shelf’? 2. For downstream tasks like ECG abnormal detection, does multi-modal pre-training on sensors such as EEG improve performance? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your detailed and insightful feedback. We appreciate your recognition of our method's novelty and the sound design of our experiments. Below, we briefly respond to each of your concerns.

## 1) Additional Baselines

Thank you for the constructive suggestions on additional appropriate baselines; we have updated the results as follows:

- Signal Specific: The updated result table is here: [cf. “Baseline Comparison with single modality foundation models” in response to Reviewer eZc9].
- Semi-supervised learning experiment: Given the distribution of labels in our dataset, we selected training subsets (10%, 100%) that maintain sufficient per-class samples for reliable model evaluation. Due to the space limit here, we present only the macro-average scores. We will include all the setting details and the full results in our manuscript:

|Model|TF-C|Signal Specific|NormWear|
|---|---|---|---|
|10%|77.787|79.939|**80.047**|
|100%|81.230|82.401|**84.381**|
|Relative improve|4.43%|3.08%|**5.41%**|

## 2) Technical Clarification

For all the downstream datasets, we split the data into train and test sets by subjects instead of by segments. We leverage the pretrained weights released by the official baseline models “off-the-shelf” (the same holds for NormWear during evaluation), with all the linear output heads being re-trained on the training sets.

## 3) Concerns regarding zero-shot

We appreciate the reviewer’s attention to the results. However, we’d like to clarify that the comparison between zero-shot NormWear and statistical baselines under linear probing is not an apples-to-apples comparison. Zero-shot predictions are made without task-specific supervision (i.e., performance when the model has never seen the task before during training), while statistical baselines benefit from supervised learning.
While there remains room for improvement in zero-shot performance, our work represents the first demonstration of zero-shot capability in the wearable signal domain, an aspect not present in recent studies (cf. the most recent baselines proposed by reviewers eZc9, xtPQ, and bfFv). Despite the highly challenging zero-shot inference setting, NormWear outperforms the baseline by 18.34%. Moreover, when provided with only 10% of labeled data and training only a linear classifier head, NormWear achieves a significant improvement of more than 25%. We included the zero-shot analysis in the spirit of providing our readers with a full picture of the upper bound of our model's performance in a very difficult zero-shot setting. We recognize that further research focusing on zero-shot performance alone is warranted, which remains outside the scope of the current investigation.

## 4) Motivation for demographic ablation study

We thank the reviewer for reviewing our supplementary material carefully. We will improve it by adding more discussion on the implications of the additional analyses. For example, we will include more context on Table 11 as suggested. Specifically, we found that several previous works [1][2] have used learned representations to infer demographic labels. These results suggest that wearable signals do contain demographic information. In Table 11, we wanted to verify that NormWear does not just extract demographic information (e.g. age, sex, height, etc., depending on what is available within each dataset), indicating that the representations our proposed model extracts and the demographics could be used as complementary features to each other during downstream modeling. The phrases “Medium” and “Large” refer to NormWear’s pretrained checkpoints on 2.58 million and 8.97 million signal segments, respectively. We’ve added these clarifications in the appendix.
## 5) Hardware runtime concern

The current scope of the paper does not address deploying the models on wearables or IoT platforms, but rather focuses on cloud-based analysis. The scalable runtime analysis was included for comparison purposes, but we understand it has caused confusion. We will remove it and instead mention developing a real-time, IoT-compatible compressed model through pruning, quantization, or knowledge distillation as part of future work.

## 6) FlashAttention

Our statement refers specifically to the quadratic complexity of self-attention with respect to sequence length and the number of sensors, not to a critique of attention mechanisms themselves. Our approach circumvents this by scaling quadratically only with the number of sensors [as shown in Figure 2(b) in the paper], making it more practical for our use case. We sincerely appreciate your valuable feedback. We have incorporated the suggested revisions and all the related citations accordingly.

## Reference

[1] Narayanswamy, Girish, et al. "Scaling Wearable Foundation Models." ICLR 2025.
[2] Abbaspourazad, Salar, et al. "Large-scale Training of Foundation Models for Wearable Biosignals." ICLR 2024.

---

Rebuttal Comment 1.1: Comment: Dear authors, **"we split the data into train and test sets by subjects"** Could you clarify how hyper-parameters (model architecture, epochs, learning rate, etc.) were chosen? Was there a distinct validation set? **"Despite the highly challenging zero-shot inference setting, NormWear outperforms the baseline by 18.34%"** While I recognise the authors' hard work in implementing and evaluating their zero-shot technique, the fact that it outperforms some baseline does not inherently mean it is a significant result for the community. As noted in my original review, very simple baselines strongly outperform the zero-shot approach. The zero-shot results are very far from being clinically useful.
---

Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer's continued engagement with our work, and we are glad that the additional experiments and clarifications following the reviewer’s suggestions have substantially improved our manuscript. We are now pleased to focus on the discussion of the remaining concerns in the reviewer's follow-up comment. **“Could you clarify how hyper-parameters (model architecture, epochs, learning rate, etc.) were chosen? Was there a distinct validation set?”** Thank you for the opportunity to clarify. Following the protocol established in earlier work [1], we use linear probing to evaluate all the models including NormWear. To ensure a fair comparison, we use the same training and test splits across all models. To avoid introducing bias from varying hyperparameter choices, we used the default settings and kept them fixed across all models and downstream datasets: (1) LogisticRegression: {'penalty': 'l2', 'C': 1.0, 'solver': 'newton-cg', 'max_iter': 500}. (2) Ridge Regression: {'alpha': 1.0, 'solver': 'cholesky', 'max_iter': 500}. **“As noted in my original review, very simple baselines strongly outperform the zero-shot approach.”** We would like to clarify that the comparison made by the reviewer in the original review between zero-shot NormWear and statistical baselines under linear probing is not a direct apples-to-apples comparison. Zero-shot inference, by definition, operates without any task-specific supervision, whereas statistical baselines leverage supervised learning, making their stronger performance in such settings unsurprising.
Regarding the simple baseline in Table 11 mentioned by the reviewer in the original review, it belongs to a completely different ablation study checking the reliance on demographic information (a detailed clarification is in the previous response, “4) Motivation for demographic ablation study”), which covers a different set of tasks from the zero-shot experiments in the main section because of fundamental differences in experimental setup, including demographic data availability across datasets and the distinct study objectives of each analysis. Therefore, the evidence leveraged by the reviewer might not directly support the claim made about the zero-shot results, which are just one of the many aspects presented in the paper. **“It outperforms some baseline does not inherently mean it is a significant result for the community.”** The significance lies not in the absolute performance, but in establishing the first empirical evidence that zero-shot inference is possible for wearable foundation models, a critical capability the field currently lacks (ref. [1-2] in reviewer xtPQ's original review, [1-3] in response to reviewer eZc9). While challenging, this direction is fundamental to realizing label-efficient wearable AI. Our work provides initial evidence and opens new research avenues in this emerging paradigm. That said, while we acknowledge the importance of strong zero-shot performance for direct initial clinical deployment, we would like to highlight that the zero-shot experiment is just one aspect of our work and should not overshadow our other core contributions, which the reviewers have also kindly acknowledged: (i) the novel modeling strategy accommodating flexible numbers and types of sensor channels, (ii) the robust performance achieved across diverse health applications, and (iii) the fact that our entire work is open-sourced for the community.
We hope this provides a more comprehensive view of our contributions and addresses the concerns raised. **“The zero-shot results are very far from being clinically useful.”** Regarding the zero-shot results, our primary intent is to demonstrate the feasibility of zero-shot learning in the wearable signal domain—a novel capability not explored in prior work, rather than claiming immediate clinical applicability. For real-world deployment potential, we highlight two important findings: (1) With only 10% labeled data and a simple logistic regression, NormWear achieves >25% improvement, and (2) As shown in our response to Reviewer BfFv, the model shows quick task-specific adaptation capability through fine-tuning. These results suggest promising directions for future research toward practical applications. We hope our responses could better address key concerns and further clarify the significance of our work. We would be honored by the reviewer's reconsideration based on the improvements made. Thank you. [1] Chen, X., Xie, S., & He, K. (2021). *An Empirical Study of Training Self-Supervised Vision Transformers*. arXiv preprint arXiv:2104.02057.
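[Editor's note] The zero-shot protocol discussed throughout this thread, aligning a signal embedding with text embeddings of candidate labels, can be sketched as follows. This is a hypothetical illustration, not the authors' MSiTF module: both frozen encoders are mocked with random vectors, and only the nearest-text-embedding decision rule is shown.

```python
# Minimal sketch of zero-shot classification via signal-text alignment.
# Encoders are mocked; in the paper they are frozen NormWear (signal)
# and a frozen clinical text encoder. All names here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Text embeddings for the candidate label prompts (mocked as random
# unit vectors; e.g. the three WESAD classes).
label_prompts = ["baseline state", "stress", "amusement"]
text_emb = l2_normalize(rng.standard_normal((len(label_prompts), 128)))

def zero_shot_predict(signal_emb):
    """Pick the label whose text embedding is most similar to the signal."""
    sims = l2_normalize(signal_emb) @ text_emb.T  # cosine similarities
    return label_prompts[int(np.argmax(sims))]

# A signal embedding lying close to the "stress" prompt is labeled stress.
query = text_emb[1] + 0.05 * rng.standard_normal(128)
print(zero_shot_predict(query))  # prints: stress
```

No task-specific supervision enters the prediction step, which is what distinguishes this setting from the linear-probing baselines the reviewer compares against.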
Summary: This paper proposes NormWear, a foundation model designed to process multichannel wearable physiological signals. NormWear is engineered to handle a variety of physiological signals in a unified manner, including EEG, ECG, PPG, GSR, and IMU, and learns generalized representations from diverse sensor data. Its key contributions include signal tokenization using multiscale continuous wavelet transforms, a channel-aware attention mechanism tailored for multichannel data, and zero-shot inference capabilities through alignment with text embeddings. Performance evaluations conducted across 18 diverse healthcare applications using 11 publicly available datasets demonstrate that NormWear outperforms state-of-the-art models. Claims And Evidence: * Claim: NormWear exhibits generalized performance across diverse sensor configurations and delivers superior performance compared to existing models. * Evidence: Across 18 publicly available datasets, NormWear demonstrated an average performance improvement of more than 3.9% over existing methods. Notably, it also outperformed other models in zero-shot learning scenarios. Methods And Evaluation Criteria: * CWT-Based Tokenization: To efficiently process frequency and temporal information from diverse sensor signals, the authors employed the CWT to transform signals into multilayered tokens across different scales. This approach enables the unified formatting of physiological signals with varying characteristics for input. * Downstream Task Performance: The performance of the pre-trained model was evaluated on various sensor datasets. Specifically, its effectiveness in processing diverse physiological signals was validated using metrics such as AUC-ROC and accuracy. * Ablation Study: Experiments were conducted to compare the model with and without CWT-based tokenization, analyzing the impact of CWT on performance. * **Review**: CWT is widely used as a method to preserve information across various frequency bands and time domains.
In practice, models employing CWT demonstrated superior performance, and experimental results confirmed that it enables more generalized tokenization compared to traditional fixed-size-window approaches. Thus, the evaluation metrics used are valid, and the experimental outcomes provide sufficient support for these findings. * Channel-Aware Attention: To handle heterogeneous data with varying channel counts, NormWear introduces a channel-aware attention mechanism that efficiently integrates features from each channel. * Ablation Study: The performance was evaluated by comparing the model with and without channel-aware attention, particularly on datasets with heterogeneous and diverse sensor channel configurations. * Downstream Task Performance: The effectiveness of channel-aware attention was confirmed through metrics like accuracy and AUC-ROC on datasets involving multiple combined sensors. * **Review**: Channel-aware attention improved performance by integrating heterogeneous sensor data, with [CLS]-attention showing the best results, proving its effectiveness. The clear design and consistent outcomes validate the evaluation metrics. * MSiTF-Based Alignment: To enable zero-shot learning, NormWear aligns signal embeddings with text embeddings using the MSiTF module. * Zero-shot Learning Performance: The zero-shot performance of the pre-trained model was assessed on unseen datasets without fine-tuning, with AUC-ROC as the primary metric. * Ablation Study: The contribution of MSiTF was validated by analyzing performance degradation when the importance score was removed from the MSiTF module. * Comparison with Baseline: Experiments demonstrated that NormWear achieves higher zero-shot performance compared to the CLAP model. * **Review**: Experiments demonstrated that the MSiTF module enhanced zero-shot performance, with performance drops when removing importance scores or text data augmentation, proving each component's necessity.
The experimental design and evaluation metrics are valid, supported by the results.

Theoretical Claims: The paper focused on experimental results, not mathematical proofs.

Experimental Designs Or Analyses:
* Comparing it with EEG-only foundation models (Labram [1], BIOT [2], and CBraMod [3]) could have clarified the proposed model's strengths and weaknesses more effectively.
* As a foundation model, leveraging large datasets should be an advantage, but it’s somewhat disappointing that it uses less data compared to EEG-only foundation models (27k [3] vs 15k [proposed]).

[1] Jiang, Wei-Bang, Li-Ming Zhao, and Bao-Liang Lu. "Large brain model for learning generic representations with tremendous EEG data in BCI." arXiv preprint arXiv:2405.18765 (2024).
[2] Yang, Chaoqi, M. Westover, and Jimeng Sun. "BIOT: Biosignal transformer for cross-data learning in the wild." Advances in Neural Information Processing Systems 36 (2023): 78240-78260.
[3] Wang, Jiquan, et al. "CBraMod: A Criss-Cross Brain Foundation Model for EEG Decoding." arXiv preprint arXiv:2412.07236 (2024).

Supplementary Material: No

Relation To Broader Scientific Literature: Compared to existing time-series models (TF-C, Chronos) or spectrum-based models (CLAP), NORMWEAR overcomes limitations in multichannel, multimodal signal processing. It appears more generalized than prior studies.

Essential References Not Discussed: It would be beneficial to discuss the advantages and disadvantages compared to single-signal-type foundation models (e.g., EEG-only foundation models).

Other Strengths And Weaknesses:
# Advantages
* Extensively validates model performance across diverse datasets and real-world applications, demonstrating high scalability.
* Enables application to real-time data through zero-shot learning.
# Disadvantages:
* As a foundation model, it should leverage large datasets, but uses less data compared to existing EEG foundation models.
* Lacks comparative experiments with single-signal (e.g., EEG-only, ECG-only) foundation models.

Other Comments Or Suggestions: No

Questions For Authors:
* Can you provide a specific analysis of the computational complexity and efficiency of the proposed MSiTF? How do you assess its feasibility for implementation in real-time wearable systems?
* In the case of EEG, the characteristics of brain signals can vary significantly depending on channel positions. How does NORMWEAR handle this positional information? If EEG signals are collected only from specific positions, is there a risk that the model’s feature recognition capability might degrade? I’m curious if there are any solutions or future plans to address this.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
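To make the CWT-based tokenization discussed in this review concrete, here is a minimal NumPy sketch of a Morlet-wavelet scalogram. This is illustrative only: the wavelet, scale grid, sampling rate, and toy signal are assumptions, not NORMWEAR's actual tokenizer configuration.

```python
import numpy as np

def morlet(t, w=5.0):
    """Simplified complex Morlet wavelet (no admissibility correction)."""
    return np.exp(1j * w * t) * np.exp(-t**2 / 2) / np.pi**0.25

def cwt_scalogram(signal, scales, w=5.0):
    """Continuous wavelet transform by direct convolution.

    Returns a (len(scales), len(signal)) magnitude map -- an image-like
    multiscale representation that can then be cut into patches/tokens.
    """
    out = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)  # wider support for larger scales
        psi = morlet(t / s, w) / np.sqrt(s)
        out[i] = np.abs(np.convolve(signal, np.conj(psi)[::-1], mode="same"))
    return out

# Toy 1-second "physiological" signal at 500 Hz: 5 Hz + 20 Hz components.
fs = 500
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

scalogram = cwt_scalogram(sig, scales=np.arange(2, 34, 2))
print(scalogram.shape)  # (16, 500): scales x time, ready for patch tokenization
```

Cutting such a scales-by-time magnitude map into patches yields the image-like multiscale tokens the review refers to.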
Rebuttal 1:

Rebuttal: Thank you for your detailed and insightful feedback. We appreciate that you found our proposed method to be generalizable and scalable, and that you recognized the thoughtful design of our NORMWEAR model, as well as its solid performance across diverse healthcare applications. Below, we briefly respond to each of your concerns. If, after reading our responses, you feel there are areas where our work can be further improved, we would greatly value your additional feedback to help us refine and strengthen our contributions.

## 1) Complexity analysis of MSiTF

Here we provide a brief overview of the complexity analysis of the MSiTF, and we added detailed analysis in the appendix.

- Runtime complexity, with d being the latent size, p being the number of total patches, c being the number of available ground truth choices:
  - Linear mapping: $O(d^2)$
  - Relevance scoring: $O(pd)$
  - Inference scoring: $O(cd)$
  - Total: $O(d(d+p+c))$
  - Since d is constant, we have runtime complexity of $O(p+c)$.
- Memory complexity, with m being the size of the text encoder, w being the size of NormWear:
  - Signal representations: $O(pd)$
  - Text representations: $O(cd)$
  - Total: $O(m+w+d(p+c))$
  - Since m, w, and d are all constants, we have memory complexity of $O(p+c)$.

While signal and text encoders require cloud offloading, MSiTF's low-latency inference and minimal memory overhead seem suitable for real-time wearable deployment. We agree that a more systematic evaluation is needed in future work, including real-world runtime assessment on wearable hardware under varying resource constraints, as well as ablation studies on optimization techniques (e.g., quantization) to balance performance and efficiency.

## 2) Question on varying EEG positions

NormWear applies intra-channel position encoding within each signal channel, ensuring that as long as the input order remains consistent, channel rearrangement does not impact high-level features.
Our experiments confirm minimal performance variation (≈0.02) when shuffling EEG channels [cf. “Technical clarifications” in response to reviewer 8Rhq]. However, we recognize that if EEG data is consistently collected from limited positions, certain spatial features might be underrepresented. We will include in the future work section the possibility of developing EEG-specific models that learn inter-channel position-aware embeddings to enhance adaptability.

## 3) Baseline comparison with single-modality foundation models

We agree with the reviewer that comparison with signal-specific foundation models would better demonstrate the position of NormWear in the recent literature:

|Datasets|Chosen Model|Signal Specific|NormWear|
|--------|------------|---------------|--------|
|WESAD|PaPaGei [1]|56.656|76.06|
|Driver Fatigue|CBraMod [2]|80.43|74.292|
|State Recognition|-|68.543|**75.176**|
|GAMEEMO|CBraMod|55.42|54.937|
|Epilepsy, eye_open|CBraMod|90.436|92.743|
|Epilepsy, eye_relax|CBraMod|95.552|94.828|
|Epilepsy, health|CBraMod|88.065|88.541|
|Epilepsy, tumor|CBraMod|87.258|87.197|
|Epilepsy, seizure|CBraMod|94.616|97.053|
|EEG Task|-|85.225|**85.883**|
|Blood Pressure Estimate from PPG|PaPaGei|90.596|92.42|
|Hemoglobin Estimate from PPG|PaPaGei|94.912|94.632|
|Vital Sign|-|92.754|**93.526**|
|Hypertension Detect, PPG|PaPaGei|61.839|62.341|
|Diabetes Detect, PPG|PaPaGei|55.668|55.893|
|Brain Stroke, PPG|PaPaGei|73.125|70.625|
|Brain Disease, PPG|PaPaGei|49.066|51.773|
|Heartbeat Abnormal Detection, ECG|ECG-FM [3]|89.898|99.14|
|Disease Risk|-|65.919|**67.954**|
|Micro Average|-|77.569|**79.498**|
|Macro Average|-|78.110|**80.635**|

## 4) Discussion of comparing with single-modal models

Thank you for the suggestion of including this aspect in the discussion. Here is a brief overview of our discussion: “NormWear's main benefit is that it captures cross-modal relationships, making it more versatile for wearable health tasks.
While it sacrifices modality-specific optimization for adaptability, this may slightly reduce performance in highly specialized tasks. Single-signal models excel in their domains due to deeper modality-focused training. Instead of maximizing single-modality data, we prioritize signal diversity for better generalization. Benchmarking shows that NormWear, trained on a smaller dataset than EEG-only models, still achieves competitive results, highlighting the effectiveness of our pre-training approach.”

We acknowledge that dataset scale is an important factor for future improvement and will refine the discussion further in the manuscript. We have incorporated the suggested revisions and all the suggested citations accordingly.

## Reference

[1] A. Pillai, D. Spathis, F. Kawsar, and M. Malekzadeh, ‘PaPaGei: Open Foundation Models for Optical Physiological Signals’.
[2] Wang, Jiquan, et al. "CBraMod: A Criss-Cross Brain Foundation Model for EEG Decoding."
[3] K. McKeen, L. Oliva, S. Masood, A. Toma, B. Rubin, and B. Wang, ‘ECG-FM: An Open Electrocardiogram Foundation Model’.
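As a rough illustration of why the per-sample cost in the MSiTF complexity analysis above collapses to $O(p + c)$ for a fixed latent size $d$, here is a minimal sketch of the three scoring steps. All shapes, names, and the exact scoring form are hypothetical stand-ins, not the actual MSiTF implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, c = 64, 48, 5  # latent size, number of patches, number of label choices

patches = rng.normal(size=(p, d))          # signal patch embeddings
choices = rng.normal(size=(c, d))          # text embeddings of candidate labels
W = rng.normal(size=(d, d)) / np.sqrt(d)   # map into semantic space: O(d^2) work
query = rng.normal(size=d)                 # stand-in for a learned scoring vector

# Relevance scoring: one score per patch -> O(p*d) multiply-adds.
weights = np.exp(patches @ query)
weights /= weights.sum()

# Fuse patches, project, then score each candidate label -> O(c*d) multiply-adds.
fused = (weights[:, None] * patches).sum(axis=0) @ W
scores = choices @ fused
pred = int(np.argmax(scores))

print(scores.shape, pred)  # (5,) scores; pred is the best-matching label index
```

With $d$ fixed, each inference touches every patch once and every candidate label once, matching the $O(p + c)$ runtime stated in the rebuttal.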
Summary: This paper introduces NormWear, a foundation model for wearable physiological signals that leverages a Vision Transformer-based architecture. It processes multi-variate signals from wearable sensors by transforming each variate into an image representation via Continuous Wavelet Transform (CWT). To enable zero-shot inference, they propose Memory Stream-inspired Temporal Fusion (MSiTF), which aligns wearable signals with the text modality. Experimental results show that NormWear outperforms baseline models in linear probing performance on average and achieves superior zero-shot inference performance.

Claims And Evidence:
- While the model is trained on a diverse dataset and demonstrates good performance, foundation models in fields like NLP or vision typically require much larger and more diverse pretraining data. The authors may justify its generalization ability through more extensive results, including comparisons with more diverse baseline models. And the zero-shot performance appears to be not very impressive.
- The details on how the signal data align with textual embeddings could be elaborated further. Additional analysis (e.g., visualizing latent alignment between signals and text) would help clarify the cross-modal relationships captured by the model.

Methods And Evaluation Criteria: The proposed approach of handling multi-modal or multi-channel time-series data by tokenizing signals, that is, by converting each individual signal into an image through CWT, seems novel, interesting, and reasonable. Some baseline models such as CLAP or Chronos seem to utilize single-variate inputs, so more details on how multi-variate inputs were handled by the baseline models should be given.

Theoretical Claims: The manuscript does not introduce formal theoretical derivations requiring rigorous proofs. Instead, it presents methodological components and validates them through empirical evaluations.
Experimental Designs Or Analyses:
- The baseline models, pretraining datasets, and evaluation metrics used in the experiments are valid and appropriate.
- The paper does not clearly specify whether each baseline model is unimodal or multimodal, or which modalities were used for training in each downstream task.
- Incorporating time-series Transformer models as additional baselines would help determine whether NormWear can effectively replace existing models in real-world applications. Evaluating against state-of-the-art Transformer-based architectures for time-series analysis would be necessary.

Supplementary Material: I've checked the Appendix, including details on implementation and hyperparameters, data augmentation and preprocessing, ablation studies, and feature visualization.

Relation To Broader Scientific Literature: This study expands the concept of large-scale time-series foundation models to a heterogeneous, multichannel, and multimodal wearable sensor environment, unlike previous studies that typically focus on single-modality or task-specific approaches.

Essential References Not Discussed: The authors may consider including the following recent papers on wearable sensing and time-series foundation models.
- Narayanswamy, Girish, et al. "Scaling Wearable Foundation Models." ICLR 2025
- Abbaspourazad, Salar, et al. "Large-scale Training of Foundation Models for Wearable Biosignals." ICLR 2024

Other Strengths And Weaknesses:
- An AUROC of approximately 60% in zero-shot inference may be too low for practical real-world applications as a foundation model. Additional experimental results on more diverse scenarios could better demonstrate its generalization ability.
- The explanation of the technical components is rather descriptive, making it difficult to replicate the results. While the code is provided as supplementary material, additional details on the methodology and evaluation framework would improve clarity and reproducibility.
Other Comments Or Suggestions:
- I'm curious why Manhattan distance and cosine similarity were used as loss functions instead of contrastive loss, which is more commonly employed in alignment-based learning. More justification for this choice would be helpful.
- The paper appears to describe [CLS] tokens as modality-specific, but in the provided code, a single [CLS] token seems to be used across modalities.

Questions For Authors:
- While the proposed method can handle any number of modalities or channels, this advantage is not well highlighted in the experimental results. Could you provide additional analysis on this aspect?
- It seems that positional encoding was not explicitly applied to multi-variate patches. Since patch ordering could affect performance, it would be interesting to examine whether the order of variate patches influences the model’s performance.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal: Thank you for your insightful and constructive feedback. We appreciate that you found our work to be novel and reasonable. Below, we briefly respond to each of your concerns.

## 1) Uni-modal baselines

We acknowledge that Section 4.1 on baseline selection could be clearer, and we will revise the manuscript to improve this. Specifically, for uni-modal baselines like Chronos and CLAP, we process each signal separately and concatenate their representations after the forward pass. This ensures that all models have the same field of view, making the comparison fair. Several baselines, including Chronos, CLAP, and TF-C, indeed employ transformer-based architectures. In addition, to better highlight the aspect that our proposed model could handle flexible settings of sensors as the reviewer suggested, we compare NormWear with several sensor-signal-specific foundation models, as presented in [cf. “Baseline Comparison with single modality foundation models” in response to Reviewer eZc9].

## 2) Loss of modality alignment

We acknowledge prior work using contrastive loss (e.g., CLAP) to align signal and language. However, in healthcare-related tasks where flexible inference across diverse scenarios is often required, the ground truth labels often have substantial overlap (e.g., depression is inferred from stress levels [1]). Due to these nested relationships, tasks cannot be easily grouped for contrastive learning, which requires clearly defined positive and negative pairs. Despite this, we did experiment with contrastive loss, but as anticipated, it did not converge due to the inherent nature of contrastive optimization. Instead, we train the model to directly learn the estimated kernel that projects signal representations into the semantic space. We will revise the manuscript to better explain the rationale and motivation behind our choice of loss function.
## 3) Technical clarifications

We sincerely appreciate the reviewer’s attention to the engineering aspects of our work. We did use a single trainable [CLS] vector. This vector is replicated and appended to each individual signal channel before being passed into the first transformer block. After the first intra-channel encoder, these [CLS] tokens become modality-specific because they have integrated channel-specific information through the intra-channel attention mechanism.

In addition, we appreciate the constructive suggestion to ablate the effect of input channel order. We agree that this analysis would strengthen the justification for the channel-agnostic design of NormWear. Specifically, since position encoding is applied independently within each signal channel, as long as the input order remains consistent within a downstream dataset, rearranging signal channels does not affect the output set of high-level features. Thus, we assume the performance shouldn’t vary significantly when shuffling the channel order. To validate this, we randomly shuffled the channels on the downstream tasks that have multiple sensor channels, and observe an average absolute difference around 0.01:

|Task|Original order|Random shuffle|Diff|
|----|--------------|--------------|----|
|WESAD (IMU, PPG, ECG, GSR)|0.761|0.763|0.002|
|UCI-HAR (IMU)|0.989|0.975|0.014|
|Drive Fatigue (EEG)|0.743|0.721|0.021|
|GAMEEMO (EEG)|0.549|0.530|0.019|
|Noninvasive-BP (PCG, PPG, ECG)|0.924|0.914|0.010|
|PPG-HGB (Red, IR)|0.946|0.948|0.002|

## 4) Zero-shot evaluation

Regarding the concern about the experimental scale, our experiments indeed span diverse scenarios—including mental health, physical activity, brain activity, and disease risk evaluation. Reviewer eZc9 also acknowledged the sufficiency of our model’s generalization ability (cf.
“Extensively validates model performance across diverse datasets and real-world applications, demonstrating high scalability.”) While there remains room for improvement in zero-shot performance, our work represents the first demonstration of zero-shot capability in the wearable signal domain, an aspect not present in recent studies (cf. included the most recent baselines proposed by other reviewers in [link to reviewer eZc9, xtPQ, bfFv]). Despite the highly challenging zero-shot inference setting, NormWear outperforms the baseline by **18.34%**. Moreover, when provided with only 10% of labeled data and training only a linear classifier head, NormWear achieves a significant improvement of more than 25% [cf. "Additional Baselines", reviewer xtPQ].

We would also like to emphasize that our contribution lies in offering a new perspective that can help drive progress in this direction within the field. We have incorporated the suggested revisions and all the suggested citations accordingly. If you feel there are areas where our work can be further improved, we would greatly value your additional feedback.

## Reference:

[1]: LeMoult, Joelle. "From stress to depression: Bringing together cognitive and biological science." Current Directions in Psychological Science 2020
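The channel-shuffle result discussed in "3) Technical clarifications" above can be illustrated with a toy model: when positional encoding is applied independently within each channel and the channels are then aggregated in an order-invariant way, permuting the channels leaves the pooled feature unchanged. This sketch uses mean pooling as a simplified stand-in for NormWear's attention-based aggregation; all sizes are toy values.

```python
import numpy as np

def sinusoidal_pe(seq_len, d):
    """Standard sinusoidal positional encoding, applied within one channel."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def encode(channels, d=16):
    """Apply intra-channel positional encoding, then mean-pool across
    channels -- an order-invariant (set-style) aggregation."""
    feats = [ch[:, None] + sinusoidal_pe(len(ch), d) for ch in channels]
    return np.mean([f.mean(axis=0) for f in feats], axis=0)

rng = np.random.default_rng(0)
chans = [rng.normal(size=100) for _ in range(4)]  # e.g., 4 EEG channels

a = encode(chans)
b = encode([chans[2], chans[0], chans[3], chans[1]])  # shuffled channel order
print(np.allclose(a, b))  # True: pooled features do not depend on channel order
```

Because positions are counted within each channel rather than across the concatenated sequence, no channel "knows" its index in the input order, which is the intuition behind the small (≈0.01) differences reported in the ablation table.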
Human Body Restoration with One-Step Diffusion Model and A New Benchmark
Accept (poster)
Summary: This paper introduces a high-speed diffusion model that can restore low-quality human body images in just one diffusion timestep. The paper presents a high-quality dataset, PERSONA, which includes diverse human body images. Additionally, the proposed OSDHuman model paves the way for incorporating visual priors into diffusion models for human body restoration. OSDHuman outperforms current state-of-the-art methods in terms of both quality and efficiency on benchmark datasets.

Claims And Evidence: The contributions of this paper are primarily divided into two parts: the PERSONA dataset and the OSDHuman model. Regarding the dataset, the authors propose a High Quality Human Dataset Pipeline, which uses label filtering, object detection, blur awareness, and IQA filtering to create a dataset of 109,052 high-quality images. The process for constructing this dataset is reasonable, and it would be beneficial for the authors to make it publicly available to contribute to the computer vision and machine learning communities. Regarding the OSDHuman model, the authors introduce a high-fidelity image embedder (HFIE) and use VSD regularization as guidance. The effectiveness of this approach is demonstrated through the ablation studies in Table 4. Experimental results demonstrate that it performs well in both visual quality and quantitative metrics.

Methods And Evaluation Criteria:
Pros: The proposed methods and evaluation criteria are meaningful. The PERSONA dataset fills the gap of lacking publicly available high-resolution open-source datasets for human body images. The proposed model also addresses the gap in portrait photography restoration.
Cons: However, the images restored from OSDHuman have some color shift. For example, in Figures 5 and 6 in the supplementary materials, the teeth of the person in the 2nd and 5th sets of images are noticeably whiter.

Theoretical Claims: The formulas appear to be correct, with no obvious issues.
Experimental Designs Or Analyses:
Pros: In the comparative experiments, the authors retrained the SinSR and OSEDiff models on the PERSONA dataset. The results in Table 2 show that after retraining with PERSONA-train data, the models achieve better performance in human body restoration.
Cons: From the visual images, it can be seen that the LQ images in PERSONA-Val differ in noise compared to the LQ images in PERSONA-Test. It appears that the Val dataset contains much more severe Gaussian noise than real-world situations. Although the authors mention that the LQ images in the Val dataset are generated using the same degradation pipeline as the training data, could it be made more realistic?

Supplementary Material: Yes, I reviewed the supplementary material. In Section A, the authors state that OSDHuman can infer a 512x512 image in just 0.11 seconds on an A6000 GPU. Additionally, the visual analysis in Figures 1 and 2 of the supplementary materials highlights the advantages of HFIE in handling low-quality images.

Relation To Broader Scientific Literature: Human body image restoration has many applications in photography, especially in mobile photography. However, most previous image restoration research has focused on either natural images or faces, such as StableSR [1], SUPIR [2], SinSR [3], DiffBIR [4], and OSEDiff [5]. These models may not perform well for tasks specific to human body images. As for datasets targeting human bodies, most existing datasets are for fashion purposes [6-7], and there has been a lack of high-resolution portrait datasets for real-world scenes. The dataset proposed in this paper is significant for training more specialized portrait photography image restoration models.
[1] Exploiting Diffusion Prior for Real-World Image Super-Resolution
[2] Scaling Up to Excellence: Practicing Model Scaling for Photo-Realistic Image Restoration In the Wild
[3] SinSR: Diffusion-Based Image Super-Resolution in a Single Step
[4] DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior
[5] One-Step Effective Diffusion Network for Real-World Image Super-Resolution
[6] Large-scale Fashion (DeepFashion) Database
[7] SCAM! Transferring humans between images with Semantic Cross Attention Modulation

Essential References Not Discussed: The related works cited in the paper are comprehensive.

Other Strengths And Weaknesses:
Pros:
1. The dataset covers many categories, as shown in Figure 7. It is of high quality and rich in diversity.
2. The efficiency of human body restoration is crucial for applications. The proposed method addresses a practical problem with an efficient solution and favorable qualitative results.
3. Unlike image-to-tag models that generate tags as textual prompts, HFIE directly tokenizes each image, eliminating information loss during the image-tag-embedding process, which can improve fidelity, as mentioned in the supplementary material.
4. The paper is clearly written, easy to follow, and presents a well-motivated and reasonable argument.
Cons:
1. The datasets presented in the article consist of 512x512 size images. Since some of the images do not have the person occupying a large area of the image, does this mean the resolution of these images is too small for the human body? For example, in Figure 6, most individuals occupy less than one-third of the image. Does this imply that the dataset is still not high definition enough?
2. Regarding the model, how scalable is it for images with larger resolutions?

Other Comments Or Suggestions:
1. In Table 2, the DISTS column shows that OSEDiff\* performs better than OSEDiff. OSEDiff\* should be cyan.
2. In 2nd row of line 240, `1024` should be `1,024`.

Questions For Authors:
1.
From Table 2 and Table 4, it seems that using HFIE as a Prompt Extractor does not result in better performance for the MANIQA metric. Could the authors provide an explanation?
2. The authors froze the timestep of the OSD model to 999. I wonder whether this parameter has any effect on the results. Could the authors clarify this?

Code Of Conduct: Affirmed.

Overall Recommendation: 5
Rebuttal 1:

Rebuttal:

`Q4-1:` However, the images restored from OSDHuman have some color shift. For example, in Figs. 5 and 6 of the supplementary materials, the teeth of the person in the 2nd and 5th sets of images are noticeably whiter.

`A4-1:` Thank you for pointing out the color shift issue. We address this by applying wavelet-based color correction [1], which combines the high-frequency details from the restored image with the low-frequency color components from the original input. This helps align color distributions while preserving structural details. We plan to integrate this correction into the model for end-to-end learning of color consistency in future work. Your feedback is greatly appreciated.

[1] Mallat, Stephane G. "A theory for multiresolution signal decomposition: the wavelet representation." TPAMI 1989.

---

`Q4-2:` The LQ images in PERSONA-Val seem to have stronger Gaussian noise than those in PERSONA-Test. Could the degradation be made more realistic?

`A4-2:` In real-world scenarios like group photos, surveillance footage, or chat images, factors such as long shooting distance, compression, and low-end devices often lead to severe degradation. Our pipeline is designed to simulate such challenging cases, which may result in stronger noise than typical natural degradations. Thank you for the suggestion. We will explore refining the pipeline by incorporating more realistic artifacts like motion blur in future work.

---

`Q4-3:` Some individuals in the dataset images occupy less than one-third of the 512×512 frame. Does this mean the resolution is too small for effective human body restoration?

`A4-3:` Thanks for raising this concern. Human body restoration is mainly used in portrait photography. In such cases, people care more about the harmony between the person and the background, not just the body or the scene alone. In everyday mobile photography, the human subject does not always take up a large portion of the image.
Instead, good composition and visual balance are more important. The PERSONA dataset uses square 512×512 images. This format works well for various poses, such as standing, sitting, crouching, or group interactions. It also helps blend the human subject with the background, requiring the model to restore both in a consistent and natural way. This is why we chose this design for the dataset.

---

`Q4-4:` Regarding the model, how scalable is it for images with larger resolutions?

`A4-4:` Thank you for your thoughtful questions. Our model scales well to high-resolution images using a tiled inference strategy. For example, we processed a 3472×4800 image by setting the VAE encoder tile size to 1024×1024 (32px overlap), latent tile size to 96×96 (32px overlap), and decoder tile size to 224×224. Inference ran on an A6000 GPU with a peak memory usage of 32 GB. You can visually inspect the detailed results using the **[anonymous link](https://imgsli.com/MzY0NzIw)**.

---

`Q4-5:`
- In Table 2, the DISTS column shows that OSEDiff* performs better than OSEDiff. OSEDiff* should be marked in cyan.
- In the second row of line 240, `1024` should be formatted as `1,024`.

`A4-5:` Thank you for pointing these out. We will make the necessary corrections.

---

`Q4-6:` From Table 2 and Table 4, it seems that using HFIE as a Prompt Extractor does not result in better performance for the MANIQA metric. Could the authors provide an explanation?

`A4-6:` Thank you for the observation. MANIQA is a no-reference IQA metric sensitive to the dataset it was trained on. In our paper, we used the PIPAL-trained version, which yielded lower scores for HFIE in Table 4. However, when using the MANIQA model trained on the KonIQ dataset, HFIE achieved the best performance in our ablation study.
The results are shown below:

|Type|From HQ|From LQ|MANIQA-PIPAL↑|MANIQA-KonIQ↑|Average↑|
|-|-|-|-|-|-|
|Null| | |**0.7226**|0.4430|0.5828|
|DAPE|✔| |0.7014|0.4309|0.5662|
|HFIE|✔| |0.6747|0.4718|0.5733|
|HFIE| |✔|0.6977|**0.4829**|**0.5903**|

This highlights how dataset bias affects MANIQA's judgment. When averaging across both metrics, HFIE performs best, further confirming its effectiveness for human body restoration.

---

`Q4-7:` The authors froze the timestep of the OSD model to 999. I wonder whether this parameter has any effect on the results. Could the authors clarify this?

`A4-7:` Thank you for your question. In one-step diffusion models, the timestep mainly determines the initial noise level. Since the model performs only a single denoising step, this parameter has limited impact on performance. Fixing it (e.g., to 999) is a common practice, and the model is then fine-tuned to adapt to this noise level.

---

Rebuttal Comment 1.1:

Comment: I appreciate the answers given and change my score to 5.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer QwNC,

Thank you for your response. We are pleased to learn that our answers addressed your concerns and appreciate your updated score.

Best regards,
Authors
Summary: The paper proposes a novel approach to human body restoration (HBR) by introducing OSDHuman, a one-step diffusion (OSD) model, and a new benchmark dataset named PERSONA. The authors develop a high-quality dataset automated cropping and filtering (HQ-ACF) pipeline to create PERSONA, which comprises 109,052 high-resolution (512×512) human images covering diverse natural activities and complex interactions. This dataset outperforms existing human-related datasets in quality and richness, addressing the lack of task-specific benchmarks for HBR. OSDHuman incorporates a high-fidelity image embedder (HFIE) to extract precise prompts from low-quality (LQ) images, avoiding misleading guidance, and employs a variational score distillation (VSD) regularizer to align generated outputs with natural image distributions.

Claims And Evidence: Almost correct.

Methods And Evaluation Criteria: Almost correct.

Theoretical Claims: Yes

Experimental Designs Or Analyses: Please refer to weaknesses.

Supplementary Material: Yes

Relation To Broader Scientific Literature: The key contributions of the paper relate to the broader scientific literature by addressing the lack of HBR-specific benchmarks, building on diffusion model advancements, and improving one-step restoration efficiency.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:
Strengths:
1. The introduction of the PERSONA dataset addresses a critical gap in human body restoration (HBR) research by providing a high-quality, diverse benchmark with 109,052 images, surpassing existing human-related datasets in quality and richness.
2. The paper provides a well-structured experimental evaluation with clear visual comparisons (e.g., Figure 9) and quantitative results (e.g., Tables 2–4), effectively showcasing OSDHuman’s superior visual quality and metric performance on the PERSONA dataset.
Weaknesses:
1.
The paper’s innovation is relatively modest, as OSDHuman and the PERSONA dataset build incrementally on existing one-step diffusion techniques and dataset curation methods, offering no transformative advancements in the field of diffusion-based image restoration.
2. The assumption that a single denoising step can effectively restore complex human images lacks rigorous justification, with no formal analysis of the HFIE’s attention mechanism or its convergence properties under severe degradation, potentially impacting the model’s reliability.
3. The article lacks more theoretical analysis of the one-step strategy. Is such a one-step strategy better than multiple steps? Can more theoretical proof be provided?
4. In addition, from a visual comparison, the visual effect of OSEDiff is obviously better than the proposed method, especially as it looks more natural. Why is this?
5. While OSDHuman’s performance claims are supported by experimental results, the lack of ablation studies on varied degradation types (e.g., motion blur, compression artifacts) weakens the evidence for the high-fidelity image embedder (HFIE)’s effectiveness.

Other Comments Or Suggestions: In general, from the perspective of the contribution of the dataset, I think this article is valuable. However, from the perspective of the method, I think it lacks contribution and novelty, as well as a detailed theoretical basis. But overall, I think it still has some contribution, so I give it a weak accept now, and I will adjust my score based on the rebuttal.

Questions For Authors: Please refer to weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: `Q3-1:` The paper's innovation seems limited, as both the model and dataset build on existing work. `A3-1:` Thanks for raising this concern. OSDHuman is the first one-step diffusion model applied to human body image restoration. Compared to traditional multi-step models, it achieves faster inference and lower computational cost, making it practical for real-world scenarios such as group photos or compressed chat images. To reduce bias introduced by external guidance modules, we introduce HFIE, which enables end-to-end training with lightweight and unbiased prompts. For more details on the novelty of HFIE, please refer to `A1-1`. The PERSONA dataset is also the first large-scale benchmark for this task, addressing prior limitations such as single-person bias, fixed poses, and narrow aspect ratios. It covers diverse poses, interactions, and real-world scenarios and will be open-sourced to support the community. --- `Q3-2:` The assumption that one denoising step can effectively restore complex human images lacks rigorous theoretical justification, and there is no formal analysis of the HFIE’s attention mechanism or its convergence properties under severe degradation. `A3-2:` Thanks for raising this concern. One-step diffusion has been widely explored in image generation and restoration. Recent works [1, 2] distill multi-step models into one step, while methods like SinSR and OSEDiff use one-step strategies for super-resolution. Since body images share similar complexity with natural images, applying one-step diffusion is theoretically reasonable. The attention mechanism of HFIE can be understood as follows: It encodes 145 embeddings into 77 vectors required by Stable Diffusion 2.1. A learnable query attends to these embeddings via softmax, producing a weighted sum that preserves both local and global information. As a convex combination in the original feature space, this ensures stable and effective guidance during training. 
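As a minimal numerical sketch of the pooling just described (illustrative only — the embedding width, initialization, and variable names below are assumptions, not the actual HFIE implementation), a bank of 77 learnable queries attends over the 145 image embeddings with softmax, so each output prompt vector is a convex combination of the inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64                                   # embedding width (assumed for illustration)
img_emb = rng.normal(size=(145, d))      # 145 embeddings from the image encoder
queries = rng.normal(size=(77, d))       # learnable queries, one per SD 2.1 prompt slot

# Scaled dot-product attention: softmax over the 145 embeddings for each query.
scores = queries @ img_emb.T / np.sqrt(d)             # shape (77, 145)
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)         # each row sums to 1
prompt_vecs = weights @ img_emb                       # (77, d) pooled prompt vectors

# Convexity is what keeps the pooled prompts inside the original feature space.
assert np.allclose(weights.sum(axis=1), 1.0) and np.all(weights >= 0)
```

Because the weights are non-negative and sum to one, every pooled vector lies in the convex hull of the encoder's features, which is the stability property the response appeals to.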
The low-quality images from our degradation pipeline are heavily degraded. OSDHuman with HFIE trains stably on them and achieves good convergence. Compared to DAPE, HFIE leads to faster loss reduction:

|Step|L2 Loss (DAPE → HFIE)|LPIPS (DAPE → HFIE)|
|-|-|-|
|10k|0.062 → 0.043|0.843 → 0.771|
|20k|0.049 → 0.042|0.771 → 0.757|
|30k|0.047 → 0.041|0.759 → 0.750|
|40k|0.047 → 0.042|0.753 → 0.753|

Loss visualizations are available at the [anonymous GitHub link](https://anonymous.4open.science/r/Submission_Number-2750-3BA1). [1] Liu et al., Flow straight and fast: Learning to generate and transfer data with rectified flow, ICLR, 2023. [2] Yin et al., One-step diffusion with distribution matching distillation, CVPR, 2023. --- `Q3-3:` The article lacks more theoretical analysis of the one-step strategy. Is such a one-step strategy better than multiple steps? Can more theoretical proof be provided? `A3-3:` Thanks for your feedback. One-step diffusion offers a practical trade-off between performance and efficiency, achieving results comparable to multi-step models with much lower latency (see `A2-1`). This is made possible by strong base models with good generalization, which are well-suited for tasks like human body restoration. As for theoretical reasoning, our work focuses more on introducing a new benchmark and model design rather than formal theoretical proof. We are also eager to see further theoretical development on one-step diffusion models, which would benefit the machine learning community. --- `Q3-4:` In addition, from a visual comparison, the visual effect of OSEDiff is obviously better than the proposed method, especially as it looks more natural. Why is this? `A3-4:` Thanks for your questions. In rare cases, our results may look less natural due to specific degradations, sometimes causing color shifts. This can be mitigated via post-processing like wavelet-based correction [3], as described in more detail in `A4-1`.
We will continue to refine our model to improve visual consistency, especially in challenging scenarios. Overall, our method outperforms OSEDiff in preserving fine facial details and natural tones. For example, in Fig. 2, expressions like subtle smiles are better retained, while OSEDiff may distort them and introduce unnatural reddish hues, indicating weaker perceptual consistency. [3] Mallat, Stephane G., A theory for multiresolution signal decomposition: the wavelet representation, TPAMI, 1989. --- `Q3-5:` The lack of ablations on varied degradations (e.g., motion blur, compression) weakens the evidence for HFIE’s effectiveness. `A3-5:` The degradation model we used, Real-ESRGAN, includes several common degradation types, such as downsampling, noise, blur, and JPEG compression. We appreciate your suggestions, and we plan to explore additional degradation types to guide the model to perform better in more natural scenarios. We will experiment with motion blur and continue to explore the model’s effectiveness across a wider range of scenarios.
Summary: This paper presents an automated dataset cropping and filtering pipeline and constructs PERSONA, a person-based restoration dataset with sophisticated objects and natural activities. A novel one-step diffusion model is proposed for human restoration. Experimental results demonstrate its effectiveness. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No theoretical proofs involved. Experimental Designs Or Analyses: The experimental design and analysis are sound. Supplementary Material: Supplementary materials are not included. Relation To Broader Scientific Literature: Related to the diffusion models and blind image restoration literature. Essential References Not Discussed: Essential references are discussed. Other Strengths And Weaknesses: Pros: 1. This work is well organized and well written. 2. A new dataset is proposed which is of research value. 3. Experimental presentations are extensive. Cons: 1. It is necessary to show the inference speed of the different algorithms, since speed is the main reason one-step diffusion models are used. 2. In ablation experiments, the proposed components did not always improve on all metrics. It is recommended to add visual comparisons to show the effectiveness of the proposed components. 3. The aim of this paper is to present the dataset and the corresponding methodology for human body restoration. However, the comparison of the visualizations in Figures 8 and 9 still focuses on the face region, and the gaps in other human body regions are not significant. I am concerned about the value of this study and how it differs from face restoration. Other Comments Or Suggestions: 1. Comparisons with non-diffusion restoration models could be added. Questions For Authors: Please see weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: `Q2-1`: It is necessary to show the inference speed of different algorithms. This is the reason why one-step diffusion models are used. `A2-1:` Thank you for your thoughtful suggestion. We have provided a detailed comparison of inference speed, parameter count, and computational cost for several recent diffusion models in the supplementary materials. For your convenience, the table below presents the same results:

| Methods| DiffBIR | SeeSR | PASD | ResShift | SinSR | OSEDiff | **OSDHuman (Ours)** |
|----|---------|-------|------|----------|-------|---------|-------|
| Step | 50 | 50 | 20 | 15 | 1 | 1 | 1 |
| Time (s) ↓| 9.03 | 5.05| 3.15 | 2.88| 0.19| 0.13| **0.11** |
| Param (M) ↓| 1717 | 2524 | 1900 | 119 | 119 | 1775 | 1576 |
| MACs (G) ↓| 24234 | 65857 | 29125 | 5491 | 2649 | 2265 | **2200** |

--- `Q2-2:` In ablation experiments, the proposed components did not always improve on all metrics. It is recommended to add visual comparisons to show the effectiveness of the proposed components. `A2-2:` Thank you for your suggestion. Regarding the observation that the proposed component (HFIE) did not always improve on all metrics in the ablation study (Table 4), we have discussed this in detail in our response to `Q4-7`. We believe that HFIE is not inferior to other methods under the MANIQA metric. We appreciate your suggestion and will include additional visual comparisons to illustrate the effectiveness of each component. You can view the visual comparisons at the [anonymous GitHub link](https://anonymous.4open.science/r/Submission_Number-2750-3BA1). --- `Q2-3:` This paper aims to present the dataset and the corresponding methodology for human body restoration. However, the comparison of the visualizations in Figs. 8 and 9 is still focusing on the face region and the gaps in other human body regions are not significant. I am concerned about the value of this study and how it differs from face restoration.
`A2-3:` Thanks for raising this concern. Firstly, the reason we focus on faces in Figs. 8 and 9 is to emphasize our method's effectiveness in restoring small-scale facial details. Since humans are particularly sensitive to facial perception, improvements in facial regions are noteworthy. Secondly, our method does not solely focus on faces; it also achieves significant restoration results for other body regions and background areas. Additional visual comparisons demonstrating these broader improvements can be found in the supplementary materials. --- `Q2-4:` Comparisons with non-diffusion restoration models could be added. `A2-4:` Thank you for your suggestion. We will add additional comparisons with classic non-diffusion restoration models. The test set comparison results are shown in the table below:

| Methods| CLIPIQA↑| MANIQA↑| MUSIQ↑| NIQE↓|
|---|----|---|---|---|
| Real-ESRGAN[1]| 0.4721 | 0.6159| 67.7610 | 4.4390 |
| BSRGAN[2] | 0.5307 | 0.6159 | 70.8345 | 4.4474 |
| SwinIR[3] | 0.4847 | 0.6240 | 69.7636 | **4.0014** |
| DAT[4] | 0.3194 | 0.3497 | 27.8132 | 8.3351 |
| HAT[5] | 0.3936 | 0.5563 | 52.5818 | 6.4262 |
| **OSDHuman (Ours)** | **0.7155** | **0.6977** | **73.7694** | 4.1287 |

These results demonstrate the effectiveness of our proposed method compared to existing non-diffusion approaches. [1] Wang et al., Real-ESRGAN: Blind super-resolution with pure synthetic data, ICCV, 2021 [2] Zhang et al., Designing a practical degradation model for deep blind image super-resolution, ICCV, 2021 [3] Liang et al., SwinIR: Image restoration using Swin transformer, ICCVW, 2021 [4] Chen et al., Dual aggregation transformer for image super-resolution, ICCV, 2023 [5] Chen et al., Activating more pixels in image super-resolution transformer, CVPR, 2023
Summary: This study addresses the challenge of human body restoration by introducing a high-quality dataset construction pipeline, HQ-ACF, which automatically crops and filters human images from existing datasets. Using this pipeline, the PERSONA dataset is created, offering superior quality and content richness compared to existing human-related datasets. Additionally, the study proposes OSDHuman, a novel one-step diffusion model for human body restoration. OSDHuman features a high-fidelity image embedder (HFIE) to generate more accurate prompts, reducing the risk of misleading guidance. Experimental results demonstrate that OSDHuman achieves state-of-the-art performance in both visual quality and quantitative metrics. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A, there are no theoretical claims. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: Some of the key contributions of the paper build on previous works such as [1]. [1] Wu, R., Sun, L., Ma, Z., and Zhang, L. One-step effective diffusion network for real-world image super-resolution. In NeurIPS, 2024a. Essential References Not Discussed: The authors have cited the related works that are essential to understanding the key contributions. Other Strengths And Weaknesses: #### **Strengths** 1. The effort in constructing a large-scale dataset specifically for human body restoration is commendable. The proposed HQ-ACF pipeline effectively leverages existing datasets to curate high-quality human images, addressing the scarcity of benchmark datasets in this domain. 2. The proposed OSDHuman model achieves state-of-the-art performance on the newly introduced PERSONA dataset, demonstrating its effectiveness in restoring human images with improved visual quality and quantitative metrics. #### **Weaknesses** 1. The novelty of the proposed approach is somewhat limited.
The concept of the VSD is directly adapted from a previous work [1], and the HFIE could be seen as an attention-based variant of the DAPE framework. More justification and discussion on the unique contributions of the method would strengthen the paper. 2. The degradation types applied in the dataset are limited and not thoroughly discussed. If the dataset primarily uses blind super-resolution from [2] as its degradation process, it would be more accurate to frame the problem as "human body super-resolution" rather than the broader term "human body restoration." A broader range of degradations would enhance the dataset’s applicability. 3. The superiority of the dataset is claimed based on its improved IQA values, yet the dataset construction process involves discarding images with lower IQA scores. This raises concerns about potential bias in evaluation and weakens the contribution of the HQ-ACF pipeline. A more transparent discussion on dataset selection criteria and its impact on evaluation would be beneficial. 4. The test set is entirely sourced from the VOC dataset, while the training set is compiled from multiple datasets. This discrepancy could lead to biased evaluations, as the test set may not fully represent the diversity of the training data. A more diverse and representative test set would provide a better assessment of model generalization. 5. The evaluation is limited to the newly introduced PERSONA dataset, without testing on existing human restoration or super-resolution datasets. Assessing the model’s performance on established datasets would better demonstrate its generalizability and highlight its advantages over prior methods. [1] Wu, R., Sun, L., Ma, Z., and Zhang, L. One-step effective diffusion network for real-world image super-resolution. In NeurIPS, 2024a. [2] Wang, X., Xie, L., Dong, C., and Shan, Y. Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In ICCV, 2021b. 
Other Comments Or Suggestions: There are some minor issues that need to be addressed: 1. The full name of SOTA is missing in Line 107. 2. The subcaption of Figure 2 should be "SinSR" instead of "Sinsr." 3. Some citation formats should be revised. For example, "(Liu et al., 2021a) represent the texture details of the human body using ..." 4. Mathematical expressions should be consistent: "z_L" is used in Line 226, while "Z_L" appears in Figure 4. 5. LoRA is not cited. Questions For Authors: Please refer to Weakness. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: `Q1-1:` The novelty seems limited, as VSD is adapted from prior work and HFIE resembles an attention-based DAPE. `A1-1:` Thank you for your valuable comments. Our model is the first to focus on human body restoration using a one-step diffusion framework. Our model's VSD module follows the OSEDiff [1] design, which builds on DMD [2] to optimize distribution loss in the latent state space. DAPE, which requires additional training, inevitably introduces errors and missed predictions, as discussed in our supplementary materials. In contrast, our proposed HFIE does not require separate training. Instead, HFIE utilizes the image encoder from the Recognize Anything Model and integrates a trainable multi-head attention layer. This approach improves performance and reduces computational costs by eliminating the need for extra training and tagging heads. [1] Wu et al., One-step effective diffusion network for real-world image super-resolution, NeurIPS, 2024 [2] Yin et al., One-step diffusion with distribution matching distillation, CVPR, 2023 --- `Q1-2:` The dataset uses limited degradation types. If it mainly relies on blind SR from [4], should the task be framed as "super-resolution" rather than "restoration"? `A1-2:` Thanks for raising this concern. The RealESRGAN [4] pipeline models a broad range of realistic degradations, including blur, noise, and JPEG compression, beyond simple downsampling. After applying this pipeline, we resample images to 512×512, preserving diverse artifacts. In addition to synthetic validation data, our test set includes real-world degraded images with motion blur, noise, and compression. Examples are shown in Figs. 5 and 6 of our supplementary materials. [4] Wang et al., Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data, ICCV 2021 --- `Q1-3:` Since low-IQA images were filtered out, does this introduce evaluation bias and weaken the value of the HQ-ACF pipeline?
`A1-3:` Thanks for raising this concern. The IQA metrics used in our HQ-ACF pipeline are widely recognized and commonly applied in image restoration tasks, ensuring the high quality of our dataset. To address potential bias, we further evaluated the dataset using additional IQA metrics not involved in the selection and compared it with other human-related datasets. As shown in the table below, PERSONA achieves the best overall quality and the highest RAM++ category diversity, indicating not only consistently high data quality but also richer semantics.

|Dataset|BRISQUE[5]↓|HyperIQA[6]↑|TOPIQ_NR[7]↑|LIQE[8]↑|RAM++ Categories↑|
|-|-|-|-|-|-|
|VOC|21.28|0.608|0.608|4.241|2759|
|iDesigner|25.80|0.632|0.647|4.388|1167|
|DeepFashion|42.19|0.639|0.650|4.681|2496|
|CrowdHuman|20.43|0.531|0.525|3.690|2220|
|**PERSONA (Ours)**|**10.38**|**0.652**|**0.661**|**4.878**|**3365**|

[5] Mittal et al., Blind/referenceless image spatial quality evaluator, Asilomar Conference on Signals, Systems, and Computers, 2011 [6] Su et al., Blindly assess image quality in the wild guided by a self-adaptive hyper network, CVPR, 2020 [7] Chen et al., TOPIQ: A top-down approach from semantics to distortions for image quality assessment, IEEE TIP, 2024 [8] Zhang et al., Blind image quality assessment via vision-language correspondence: A multitask learning perspective, CVPR, 2023 --- `Q1-4:` Since the test set only uses VOC, does it fully reflect the diversity of the training data? `A1-4:` The VOC dataset contains images that generally exhibit more severe degradations compared to more recent datasets. Thus, we consider it suitable for evaluating the model's restoration capabilities. We appreciate the suggestion and will include more diverse sources in the public release to enhance test set representativeness. --- `Q1-5:` Evaluation is only on PERSONA. Would testing on existing datasets better demonstrate generalizability? `A1-5:` Thank you for your thoughtful questions.
Our proposed PERSONA dataset is the first publicly available benchmark specifically designed for human body restoration. Previous human body restoration methods [9, 10] have not provided public access to their test datasets. We hope that the release of the PERSONA dataset and benchmark will facilitate further contributions to the machine learning and computer vision communities. [9] Zhang et al., Diffbody: Human body restoration by imagining with generative diffusion prior, arXiv:2404.03642, 2024 [10] Wang et al., Prior based pyramid residual clique network for human body image super-resolution, Pattern Recognition, 2024 --- `Q1-6:` There are some minor issues that need to be addressed: 1. The full name of SOTA is missing in Line 107. 2. The subcaption of Fig. 2 should be "SinSR" instead of "Sinsr." 3. Some citation formats should be revised. 4. Mathematical expressions should be consistent: "z_L" is used in Line 226, while "Z_L" appears in Fig. 4. 5. LoRA is not cited. `A1-6:` Thank you for pointing out these issues. We'll carefully revise them.
Revolve: Optimizing AI Systems by Tracking Response Evolution in Textual Optimization
Accept (poster)
Summary: The manuscript proposes Revolve, a new method that enhances LLM-based optimization by simulating second-order dynamics for self-evolving agents. Existing gradient approximation methods use textual feedback to approximate first-order gradients but are less effective for long-horizon optimization. Revolve addresses this by modeling response evolution over multiple iterations, which captures higher-order refinements for more stable and informed adjustments. Tested on various tasks, the method consistently outperforms existing baselines such as CoT, TextGrad, and Reflexion across diverse LLM backends, from Llama-3.1-8B to GPT-4o. The evaluation also shows that Revolve improves efficiency, reducing total runtime by up to 50%. By shifting textual optimization from a purely feedback-driven process to a structured, trajectory-aware approach, Revolve enables more generalizable and scalable adaptation in AI agent systems. Claims And Evidence: After thoroughly checking the paper, I find that the claims presented in the paper are well-supported by both theoretical justification and empirical validation: Claim 1: Methodology: Revolve enhances LLM-based optimization by simulating second-order effects. Evidence: 1. The mathematical framework in Section 3.4 formalizes how Revolve captures response trajectory dynamics. 2. Section 3.5 clarifies that Revolve does not compute second-order derivatives numerically but instead approximates such effects through structured response tracking. 3. Empirical loss curves in Figure 2 illustrate that Revolve mitigates stagnation and stabilizes optimization, whereas first-order methods exhibit oscillatory behavior. Claim 2: Accuracy: Revolve outperforms existing baselines across multiple tasks. Evidence: Evaluation results in Section 4 support this claim, e.g., a 29.17% performance gain over state-of-the-art baselines in code optimization. Claim 3: Efficiency: Revolve optimizes more efficiently by reducing total runtime.
Evidence: Appendix G reports that Revolve achieves faster convergence while maintaining stable performance improvements. Claim 4: Generality: Revolve generalizes effectively across LLMs. Evidence: Comprehensive multi-model evaluations confirm its adaptability, with results detailed in the universality analysis in Section 4.1. Methods And Evaluation Criteria: The methodology is well-structured, and the evaluation criteria are comprehensive and appropriate: 1. The method is tested across various key tasks: solution optimization, prompt optimization, and code optimization, where each requires iterative refinement and long-horizon reasoning. 2. The experiments leverage diverse benchmarks, e.g., BBH, MMLU, and LeetCode Hard, which covers a range of reasoning complexities. 3. Baseline comparisons with CoT, TextGrad, and Momentum-Enhanced method provide a fair assessment across different optimization paradigms. 4. The study also evaluates Revolve across multiple LLM architectures to demonstrate its generalizability beyond specific models. 5. Computational efficiency is analyzed to ensure that performance improvements are not achieved at excessive computational cost. Overall, the experimental setup and evaluation methodology are well-designed, with no major concerns. Theoretical Claims: The derivations in this paper are logically sound and align with empirical results, with no evident inconsistencies. Experimental Designs Or Analyses: Please refer to methods and evaluation criteria part. Supplementary Material: N/A Relation To Broader Scientific Literature: I think this approach would connect to broader trends in test-time adaptation and self-refinement. Beyond improving accuracy, the paper contributes to efficient LLM adaptation, balancing performance with computational efficiency, which is a critical challenge in large-scale AI. 
Its strong generalization across diverse LLMs and tasks positions Revolve as part of a broader effort to enhance AI adaptability and inference-time optimization. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. Originality: The idea that modeling response evolution to simulate second-order optimization effects in textual adaptation is interesting and holds broad significance for both LLM research and practical applications. Besides the methodological contribution, the paper marks a conceptual shift in LLM optimization. It moves beyond purely feedback-driven updates by incorporating structured response tracking, which bridges textual optimization with second-order principles. 2. Clarity: This paper is overall well-structured and easy to follow. The general setup in the section of methodology helps readers to understand the proposed Revolve framework. Additionally, the detailed information on task-specific implementations in the Experiment section, along with supporting information in the Appendix, facilitates implementation and reproducibility. 3. Evaluation: Extensive evaluation across multiple tasks demonstrates Revolve's substantial performance improvement compared to baseline methods across various LLM backbones. 4. Significance: Revolve's design exhibits impressive generality across various tasks. It has great chances to benefit researches in diverse directions. Weaknesses: 1. The authors are recommended to compare Revolve with ProTeGi [1] in prompt optimization for a more thorough comparison. [1] Pryzant, R., Iter, D., Li, J., Lee, Y. T., Zhu, C., & Zeng, M. (2023). Automatic Prompt Optimization with “Gradient Descent” and Beam Search (No. arXiv:2305.03495). arXiv. http://arxiv.org/abs/2305.03495 2. A typo that doesn’t impact clarity too much but ought to be addressed: in page 7, row 353, the results should reference Table 2 instead of Table 5, as Table 5 contains the complete results. 
Other Comments Or Suggestions: Please refer to weaknesses part. Questions For Authors: Please refer to weaknesses part. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed evaluation of our work, for acknowledging the soundness of our work, the clarity of the writing, the coverage of related literature, and the comprehensiveness of our experiments. We'll address your concerns in our following response. **Q1. New experiments on the ProTeGi baseline.** We thank the reviewer for suggesting a comparison with ProTeGi. We have now included ProTeGi as a baseline in our prompt optimization experiments. To adapt it to our setting, we used GPT-4o to generate high-quality few-shot exemplars for LLaMA 3.1 8B. The updated results (accuracy, %) are shown below:

| Dataset | CoT | ProTeGi | TextGrad | M-TextGrad | REVOLVE |
|----|----|----|----|----|----|
| Object Counting | 65.0 | 68.0 | 77.0 | 80.0 | 83.0 |
| GSM8K | 84.6 | 84.6 | 84.6 | 84.6 | 84.6 |

We observe that while ProTeGi yields slight improvements over standard CoT prompting on Object Counting, our method achieves higher gains. On GSM8K, all methods perform similarly, likely due to task saturation. These results further validate the effectiveness of REVOLVE in improving prompt optimization, particularly in more challenging settings. ___ **Q2. A typo that doesn’t impact clarity too much but ought to be addressed: in page 7, row 353, the results should reference Table 2 instead of Table 5, as Table 5 contains the complete results.** Thank you for pointing this out. We have corrected the reference in the revised manuscript.
Summary: The paper looks at leveraging information beyond first-order feedback in textual optimization. Revolve develops a way to keep an account of previous feedback steps, and mitigates the issue of stagnating when feedback is limited or fluctuates irregularly. The authors evaluate REVOLVE on three tasks: prompt optimization, solution optimization, and code optimization. Revolve seems to converge faster, and outperform vanilla and momentum-based TextGrad. ## update after rebuttal I still support the publication of the paper, and the authors clarified my questions around costs and implementation. Claims And Evidence: The claims in this paper are supported by the experimental results across the three optimization tasks, but there are some limitations in the evidence provided. The paper makes strong claims about escaping local optima, but lacks e.g., qualitative analysis of actual response trajectories that would illustrate this mechanism in action. It would help my understanding quite a bit to see how this mechanism works in action. I may be missing this, but I also do not see how the similarity is computed using an LLM, which seems important. Methods And Evaluation Criteria: The experiments show promising results; I think they are generally adequate and also mostly mirror the evaluations in TextGrad. There are two claims that would be better supported with additional analyses: 1) How do different ways of computing similarity help? How are the design choices made there? 2) While there is a section in the appendix with a few paragraphs, the claims around computational benefits would benefit from a clearer analysis. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: I checked the experimental design across the three optimization tasks (prompt, solution, and code). The overall structure is sound, comparing REVOLVE against established baselines with appropriate metrics for each domain.
However, there is a lack of ablations for the similarity computation and what kinds of second-order effects are captured. In the experiments, while absolute performance improvements are highlighted, the paper lacks confidence intervals or significance testing. As far as I can tell, all experiments are conducted with one pass, and e.g., a 1% improvement in object counting corresponds to getting 1 more question right (if I remember the dataset size correctly; please correct me if I'm wrong). Supplementary Material: I looked at the prompts being used and the discussion on the additional computational cost. Relation To Broader Scientific Literature: The paper's approach to tracking response evolution across iterations provides a relatively interesting extension to existing textual optimization methods. The potential downstream applications in automated prompt engineering and self-refining systems could be useful, particularly for industrial applications where optimization efficiency matters. However, the core idea remains a straightforward extension of existing gradient-based textual optimization rather than a fundamental breakthrough. Essential References Not Discussed: I think essential references are mostly discussed, except one concurrent work named HessianGrad:

@misc{zhang2025hessiangrad,
  title={HessianGrad: Optimizing {AI} Systems with Hessian-Aware Textual Gradients},
  author={Peiyan Zhang and Haibo Jin and Leyang Hu and Xinnuo Li and Liying Kang and Man Luo and Yangqiu Song and Haohan Wang},
  year={2025},
  url={https://openreview.net/forum?id=0hc7iQLhCt}
}

Other Strengths And Weaknesses: To me the major weakness is the lack of a clear discussion of the observed second-order effects and of the specific way the second-order effect is incorporated (what type of prompt, what type of workflow and LLM, what the cost of such a call is, etc.)
Other Comments Or Suggestions: - Questions For Authors: Repeating here for completeness: For the second-order effects, what type of prompt or what type of workflow and LLM did the authors use? What is the additional cost of computing the second-order effects? What type of second-order effects did the authors observe in the distribution of feedback? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your valuable time and positive feedback. We'll address your concerns in our following response. **Q1. For the second-order effects, what type of prompt or what type of workflow and LLM did the authors use?** We appreciate the request for clarification on how second-order effects are realized. In REVOLVE, we approximate second-order effects by combining two elements: (1) first-order feedback from the evaluator, and (2) a similarity signal that tracks how the model's responses evolve across iterations. This similarity reflects changes in task performance between responses, scaled by how much the prompt has changed (as described in the formula in lines 198–200). Rather than computing this explicitly, we guide the LLM with instructions like: _"Consider how the responses to this variable have changed across previous iterations: <PAST_ITERATIONS>{past_values}</PAST_ITERATIONS>, …, Ensure future responses reflect a meaningful, gradual evolution."_. This setup allows the LLM to reason over both current feedback and the broader trajectory of response updates. We hope this clarifies our workflow. ___ **Q2. Qualitative analysis of response trajectories that shows the second-order effects. What type of second-order effects did the authors observe in the distribution of feedback?** We appreciate the request for qualitative analysis. We've added a new section with full response and feedback trajectories to illustrate the differences. **Response Trajectories:** On the Object Counting task, TextGrad repeats the same prompt across the final 8 iterations: _"You will answer a reasoning question. … Use standard mathematical notation… Present in bullet points… The last line should be: 'Answer: $VALUE'"_. This reflects stagnation.
M-TextGrad also plateaus: its last 9 responses repeat a verbose prompt: _"Begin by summarizing the problem… Confirm the list is complete… Use consistent notation… Implement a verification step… Consider edge cases… Conclude with: 'Answer: $VALUE'"_. REVOLVE repeats a prompt for iterations 5–8: _"..., Clearly state the context… List each item… Use a consistent format… Verify the calculation… Final line: 'Answer: $VALUE'"_ but updates it in iteration 9 with new semantics. This suggests second-order behavior: it detects when progress stalls and makes a more informed shift to move forward. **Feedback Evolution:** In TextGrad, feedback stays repetitive: _"Clarify object listing." → "Use bullet points." → "Be more concise."_ but responses hardly improve. M-TextGrad shows oscillatory feedback: _"Too verbose, streamline." → "Missing verification, add back." → "Too shallow, expand explanation."_, feedback changes, but the response bounces back and forth. REVOLVE aligns feedback with evolving responses. Early iterations focus on structure: _"..., Add a verification step."_. Mid-phase addresses reasoning: _"..., Ensure entity counts match context."_. Later rounds provide refinement: _"..., Reduce redundancy."_ We hope this helps clarify the second-order behavior we observed. Full examples are now included in the revised paper. ___ **Q3. Additional cost of computing the second order effects?** We thank the reviewer for raising this question. To make the cost clearer, we’ve now added a per-iteration breakdown in the main paper: |Component|TextGrad|REVOLVE|Overhead| |----|----|----|----| | Feedback Collection|75.1s|75.1s|0| | Optimization|16.8s|63.0s|46.2s| | Total per Iteration|91.9s|138.1s|+46.2s| The extra cost mainly comes from the inclusion of past responses to let the model reason over how things have evolved. We’ve added this clarification to the paper. ___ **Q4. Lack of ablations for the similarity computation.** We thank the reviewer for the suggestion. 
While similarity is not computed explicitly, we ablate how similarity is conveyed to the LLM. In one version, we replace the task-performance-oriented signal ($L(r(p_t)) − L(r(p_{t-1}))$) with a simpler one based only on raw text differences ($r(p_t) − r(p_{t-1})$).

|Dataset|Textual Similarity|REVOLVE|
|----|----|----|
|Object Counting|81.0|83.0|
|GSM8K|84.6|84.6|

These results confirm that task-performance-oriented similarity yields more gains, as it offers a deeper understanding of response efficacy. We have added these results in the revised manuscript.

___

**Q5. Lack of confidence intervals or significance testing.**

Thank you for pointing this out. For all tasks, we run five independent trials using different random seeds. The reported accuracy (e.g., 95.3% on Object Counting) reflects the average across these runs. As noted in Appendix E, we also use consistent seeds [15, 17, 21, 55, 91] for code optimization. We've now added confidence intervals to the main results, e.g., Table 1 would be updated as follows:

For REVOLVE:
* GPT-3.5: 95.5 ± 0.9% (3.9%↑)
* GPT-4: 96.3 ± 0.6% (2.2%↑)
* Gemini 1.5 Pro: 94.0 ± 0.0% (0.0%)
* Llama 3.1: 83.0 ± 1.4% (7.8%↑)

We've revised the paper accordingly to reflect this.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. A few more questions for clarity:

- **Q3. Additional cost of computing the second order effects?** What task is this over? I imagine there should be a distribution, as opposed to single numbers.
- **Q4. Lack of ablations for the similarity computation.**: It's not clear to me the numbers reflect a significantly better efficacy, and to understand it requires running the experiment a few more times with statistical tests.

Overall, I still support the publication of the paper.

---

Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for the follow-up and continued support.
We’re grateful for the encouraging remarks regarding the paper’s clarity, comprehensiveness, and overall quality. **Q3. Additional cost of computing the second order effects? What task is this over? I imagine there should be a distribution, as opposed to single numbers.** We appreciate the reviewer’s follow-up. The reported numbers are based on the BBH Object Counting task using LLaMA-3.1-8B-Instruct as the LLM backend. To provide a more complete picture, we report the per-run breakdown over 5 independent runs for each component of the iteration time: | Component | Run 1 |Run 2 |Run 3 |Run 4 |Run 5 | | ---- | ---- |---- |---- |---- |---- | | TextGrad | | | | | | | Feedback Collection (s) | 75.0|75.2|75.1|75.0|75.1 | | Optimization (s)|16.2| 16.8| 17.4| 16.5| 17.1| | Total per Iteration (s)|91.2| 92.0| 92.5| 91.5| 92.2| | REVOLVE | | | | | | | Feedback Collection (s) | 75.2| 75.1| 75.3| 75.2| 75.0 | | Optimization (s)|62.0| 63.3| 63.9| 62.8| 63.1| | Total per Iteration (s)|137.2| 138.4| 139.2| 138.0| 138.1| Summary Table: | Component | TextGrad (s) |REVOLVE (s) |Overhead (s) | | ---- | ---- |---- |---- | | Feedback Collection | 75.08 ± 0.08| 75.16 ± 0.11| +0.08 ± 0.15 | | Optimization | 16.80 ± 0.47| 63.02 ± 0.70| +46.22 ± 0.28 | | Total per Iteration | 91.88 ± 0.53| 138.18 ± 0.72| +46.30 ± 0.28 | This additional cost comes mainly from REVOLVE’s longer prompts, which include past responses to support second-order reasoning. While per-step runtime is higher, REVOLVE typically converges in fewer iterations, which often reduces overall compute cost. We’ve clarified this in the updated manuscript. ___ **Q4. Lack of ablations for the similarity computation: It's not clear to me the numbers reflect a significantly better efficacy, and to understand it requires running the experience a few more times with statistical tests.** We appreciate the reviewer’s follow-up and agree that statistical testing helps clarify the effectiveness of our design. 
To address this, we reran both variants (task-performance-oriented similarity, i.e., REVOLVE, and textual similarity) on the Object Counting task using LLaMA-3.1-8B-Instruct as the backend. We use five fixed random seeds for consistency, and report the per-run accuracies below:

| Random Seed | Textual Similarity (%) | REVOLVE (%) |
| ---- | ---- | ---- |
| 15 | 80 | 83 |
| 17 | 82 | 84 |
| 21 | 81 | 84 |
| 55 | 79 | 81 |
| 91 | 81 | 83 |

A paired t-test yields a t-statistic of 3.21 (p = 0.033), which indicates the improvement is statistically significant at the p < 0.05 level. This supports our claim that task-performance-oriented similarity helps guide more effective updates than raw textual similarity. We've added these results and clarifications to the revised manuscript and again thank the reviewer for encouraging us to strengthen this part of the analysis.
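As an editorial sanity check, the paired t-test on the five per-run accuracies above can be reproduced in a few lines of numpy (our illustrative sketch, not the authors' code). Note that the rounded integer accuracies in the table yield a larger t-statistic than the reported t = 3.21, which was presumably computed on unrounded per-run values; the conclusion of significance at the 5% level holds either way.

```python
import numpy as np

textual = np.array([80, 82, 81, 79, 81], dtype=float)  # Textual Similarity runs
revolve = np.array([83, 84, 84, 81, 83], dtype=float)  # REVOLVE runs

# Paired t-test: t = mean(diff) / (sd(diff) / sqrt(n)), df = n - 1.
diffs = revolve - textual
n = len(diffs)
t_stat = diffs.mean() / (diffs.std(ddof=1) / np.sqrt(n))

# Two-sided critical value for df = 4 at the 5% level is 2.776,
# so t_stat > 2.776 implies p < 0.05 for the paired test.
print(f"t = {t_stat:.2f}")
assert t_stat > 2.776
```

The same result is obtained with `scipy.stats.ttest_rel(revolve, textual)` when scipy is available.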
Summary: In this paper, the authors introduce REVOLVE, which aims to simulate the second-order derivative during the optimization process. They compared their modified optimization prompts and found the outcome was better than TextGrad itself.

## update after rebuttal

The authors explained the implementation difference between REVOLVE and M-TextGrad, which makes sense to me.

Claims And Evidence: The authors claim that "REVOLVE can escape local optima." To demonstrate this, they show that their method resembles a second-order derivative method. However, they do not explain why the similarity function $\mathcal{S}(r(p_t),r(p_{t-1}))$ can be interpreted as a gradient from an implementation perspective. In other words, the authors need to clarify how their method can be understood as presented in the formula in lines 198–200.

Methods And Evaluation Criteria: The authors tested several popular models on prompt optimization, solution optimization, and code optimization. The evaluation criterion is Accuracy (Completion Rate). The evaluation process makes sense.

Theoretical Claims: The paper has no theoretical claims.

Experimental Designs Or Analyses: The authors compare their methods with several TextGrad baselines. Can the authors add `DSPy` baselines since DSPy optimizers are also very competitive?

Supplementary Material: Yes, I reviewed the supplementary material.

Relation To Broader Scientific Literature: 1. The proposed method aims to provide a better optimizer for TextGrad-based applications, which is useful and important. 2. The idea of applying a second-order textual optimization is novel.

Essential References Not Discussed: The related works are quite comprehensive, covering various recent studies.

Other Strengths And Weaknesses: 1. The paper explores integrating an existing idea from traditional optimization into textual optimization. Currently, textual optimization is still in its early stages, so such an integration is meaningful. 2.
The authors need to clarify the difference between their similarity function and momentum, as neither involves second-order computation, and they appear similar in terms of implementation.

Other Comments Or Suggestions: The authors sometimes use "Revolve" and sometimes use "REVOLVE." It should be standardized to one format.

Questions For Authors: Can the authors explain the `implementation difference` between REVOLVE and M-TextGrad? I checked the prompts provided in the appendix, and I also checked the prompts in the original [TextGrad repo](https://github.com/zou-group/textgrad/blob/main/textgrad/optimizer/optimizer_prompts.py). It seems that the two implementations look similar. Can the authors clarify what their main modification at the implementation level is?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your valuable time, your insights, and your recognition of our strengths. We address your concerns in the responses below.

**Q1. The authors claim that "REVOLVE can escape local optima." However, they do not explain why the similarity function can be interpreted as a gradient from an implementation perspective. In other words, how can REVOLVE be understood as presented in the formula in lines 198–200?**

We thank the reviewer for the helpful question on how REVOLVE can be interpreted as a gradient-like method, and how this relates to the formula in lines 198–200.

**How REVOLVE identifies local optima:** REVOLVE tracks when responses stop improving, even as prompts continue to evolve. This is captured by the numerator of the similarity function: $L(r(p_t))-L(r(p_{t-1}))$, which reflects how much the task performance changes between responses. In implementation, we provide the LLM with both evaluator feedback and the context that generated it, allowing the model to implicitly assess whether performance is improving or stagnating across iterations.

**How REVOLVE escapes local optima:** To move beyond stagnation, the model is guided by the denominator: $p_t-p_{t-1}$, which represents the degree of change in the prompt. If the response isn't improving much despite prompt updates, this signals stagnation. To help the model escape, we encourage it to make stronger updates when past changes haven't helped, and smaller ones when the trajectory is already improving. This is achieved through instructions like: _"Ensure future responses reflect a meaningful, gradual evolution based on past iterations, ..., avoiding abrupt shifts"_.

**Why this resembles a gradient in practice:** Together, the performance difference (numerator) and the prompt change (denominator) form a ratio that functions like a gradient: it reflects whether updates are effective and how future steps should be adjusted.
While we don’t compute this numerically, the LLM is given all the elements needed to assess it implicitly, which functions as a curvature-aware, gradient-like signal that informs future updates. ___ **Q2. The authors need to clarify the difference between their similarity function and momentum, as neither involves second-order computation, and they appear similar in terms of implementation.** We thank the reviewer for raising this important point. In essence, M-TextGrad focuses on repeating feedback, whereas REVOLVE targets repeating model responses. Implementation-wise, M-TextGrad looks at repeated feedback from the evaluator. If similar feedback appears across steps, it increases the update size, following the intuition that prior changes weren’t enough. This is similar to momentum in traditional optimization. The relevant instruction is: _"Similar feedbacks across different steps suggest that the modifications are insufficient… make more significant changes."_ In contrast, REVOLVE looks at repeated model responses. If the response itself doesn’t evolve meaningfully across iterations, we prompt the model to revise more significantly. This is achieved with the following prompt: _"Additionally, consider how the responses to this variable have changed across previous iterations: <PAST_ITERATIONS>{past_values}</PAST_ITERATIONS>. Make sure future responses reflect a meaningful, gradual evolution based on these past iterations, encouraging thoughtful progress rather than drastic shifts."_ The two methods differ mainly in where they look for signals: M-TextGrad focuses on feedback patterns, while REVOLVE monitors the model’s own output history. We find this shift helps the model better track its progress and avoid getting stuck. ___ **Q3. New experiments on the DSPy baseline.** We thank the reviewer for the suggestion. We’ve now included DSPy as a baseline in our prompt optimization experiments. 
To adapt it to our setting, we used GPT-4o to generate few-shot exemplars for LLaMA 3.1 8B. The results (accuracy %) are shown below:

| Dataset | CoT | DSPy | TextGrad | M-TextGrad | REVOLVE |
| ---- | ---- | ---- | ---- | ---- | ---- |
| Object Counting | 65.0 | 75.0 | 77.0 | 80.0 | 83.0 |
| GSM8K | 84.6 | 84.6 | 84.6 | 84.6 | 84.6 |

On GSM8K, DSPy performs similarly to other methods, which we believe is due to task saturation. On the Object Counting task, however, DSPy underperforms REVOLVE. We hypothesize this is because DSPy's pipeline-level optimization and demonstration tuning are less adaptive in iterative feedback settings. REVOLVE benefits from tracking how responses evolve, which helps guide the optimization more consistently. We've added these results to the manuscript and appreciate the helpful suggestion.

___

**Q4. Inconsistent usage of "Revolve" and "REVOLVE."**

We thank the reviewer for pointing this out. We have standardized "REVOLVE" in the manuscript.

---

Rebuttal Comment 1.1: Comment: The authors clearly explain the difference between their methods and TextGrad momentum. I will update my score accordingly.

---

Reply to Comment 1.1.1: Comment: Thank you for your thoughtful update. We're glad the distinction between REVOLVE and TextGrad momentum is now clearer. We truly appreciate your constructive feedback throughout the process.
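To give numerical intuition for the "ratio functions like a gradient" analogy in A1 above (our illustration only; REVOLVE operates on text through prompts, not on numbers), consider a scalar "prompt" $p$ with a known task loss $L$: the quantity $(L(r(p_t))-L(r(p_{t-1})))/(p_t-p_{t-1})$ then reduces to a finite-difference gradient estimate, which shrinks toward zero exactly when prompt changes stop paying off.

```python
# Treat the prompt as a scalar parameter p with a toy task loss L,
# purely as an analogy for the similarity signal in lines 198-200.
L = lambda p: (p - 2.0) ** 2        # toy loss, minimized at p = 2

p_prev, p_t = 0.5, 0.6              # two consecutive "prompts"
ratio = (L(p_t) - L(p_prev)) / (p_t - p_prev)   # finite-difference estimate

# For a quadratic, this ratio equals the exact derivative at the midpoint.
midpoint_grad = 2 * ((p_prev + p_t) / 2 - 2.0)
assert abs(ratio - midpoint_grad) < 1e-9

# Near the optimum, the ratio shrinks: prompt changes stop paying off,
# which is the stagnation signal REVOLVE tries to detect.
plateau = (L(2.01) - L(1.99)) / (2.01 - 1.99)
assert abs(plateau) < 1e-6
```

A large-magnitude ratio says recent updates moved the loss, a near-zero ratio says they did not; REVOLVE's prompts encourage stronger edits in the latter case.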
Adversarial Robust Generalization of Graph Neural Networks
Accept (poster)
Summary: The paper investigates the adversarial robustness of Graph Neural Networks (GNNs) in node classification tasks. The authors propose a high-probability generalization bound for GNNs under adversarial attacks using covering number analysis. They derive bounds for several popular GNN models (GCN, GCNII, APPNP) and analyze the impact of architectural factors on adversarial generalization. The paper also provides experimental results on benchmark datasets to validate the theoretical findings, showing that factors like model architecture, graph filters, and regularization parameters influence the generalization gap under adversarial attacks. Claims And Evidence: The paper makes several claims regarding the generalization bounds of GNNs under adversarial attacks. While the theoretical framework is well-structured, the evidence supporting these claims is not entirely convincing. The experimental results, though consistent with the theoretical predictions, are limited in scope and do not fully validate the broad applicability of the proposed bounds. The authors rely heavily on synthetic or controlled settings, and the generalization to real-world scenarios remains unclear. Additionally, the paper lacks a thorough comparison with state-of-the-art adversarial training methods, which weakens the claim of providing a comprehensive understanding of adversarial robustness in GNNs. Methods And Evaluation Criteria: The methods proposed in the paper, particularly the covering number analysis, are theoretically sound and appropriate for analyzing the adversarial robustness of GNNs. However, the evaluation criteria are somewhat limited. The experiments are conducted on standard benchmark datasets, but the adversarial attacks used (e.g., PGD) are relatively simple and do not cover the full spectrum of possible adversarial perturbations. 
The paper would benefit from evaluating the proposed bounds against more diverse and challenging attack scenarios, as well as comparing with other adversarial training techniques.

Theoretical Claims: The theoretical claims are based on covering number analysis, which is a well-established tool in statistical learning theory. The proofs provided in the appendix appear to be correct, but the paper lacks a detailed discussion of the assumptions made (e.g., Lipschitz continuity of the loss function and model architecture). These assumptions may not hold in practice, especially for more complex GNN architectures or non-smooth loss functions. For instance, though Assumption 4.1 may be satisfied in standard non-graph neural networks, it could be violated in graph neural networks due to message passing over interdependent graph data. The paper would benefit from a more thorough exploration of the limitations of these assumptions and their impact on the generalization bounds.

Experimental Designs Or Analyses: The experimental design is reasonable but lacks depth. The authors evaluate the generalization gap under adversarial attacks for different GNN models, but the experiments are limited to a few datasets and attack methods. The results, while consistent with the theoretical predictions, do not provide strong empirical evidence for the robustness of the proposed bounds. The paper would benefit from more extensive experiments, including comparisons with state-of-the-art adversarial training methods and evaluations on larger, more diverse datasets.

Supplementary Material: N/A

Relation To Broader Scientific Literature: The paper is well-situated within the broader literature on adversarial robustness and GNNs. It builds on prior work in adversarial training and generalization analysis, particularly in the context of graph-structured data. However, the paper does not sufficiently highlight how its contributions advance the state-of-the-art.
While the theoretical bounds are novel, the practical implications and applications of these bounds are not clearly articulated. The paper would benefit from a more detailed discussion of how the proposed bounds compare to existing methods and what new insights they provide.

Essential References Not Discussed: The referred papers (Szegedy et al., 2013; Goodfellow et al., 2014) are irrelevant to GNN applications. The paper misses many relevant papers on GNN attacks and defenses.

Other Strengths And Weaknesses: Strengths:
* The paper addresses an important and timely problem in the field of adversarial robustness for GNNs.
* The theoretical framework is well-structured and provides a solid foundation for analyzing the generalization properties of GNNs under adversarial attacks.

Weaknesses:
* The empirical evaluation is limited in scope and does not fully validate the broad applicability of the proposed bounds.
* The paper lacks a thorough comparison with state-of-the-art adversarial training methods, which weakens its claim of providing a comprehensive understanding of adversarial robustness in GNNs.
* The assumptions made in the theoretical analysis (e.g., Lipschitz continuity) are not thoroughly discussed, and their practical implications are not explored in depth.
* The paper does not clearly articulate how its contributions advance the state-of-the-art or provide new insights beyond existing work.
* The theoretical results are only for node feature perturbation, while graph structure perturbation is more common against GNNs.
* The paper misses much relevant work on GNN attacks and defenses.

Other Comments Or Suggestions: What are the key challenges/difficulties of the derived theoretical results, compared with the existing theoretical results on non-graph data? Can the proposed theoretical result be applied to graph structure perturbation? What type of GNN architecture is suited to the derived theoretical results? How to calculate the generalization gap in the evaluations?
The paper uses many assumptions, which makes me doubtful about calculating the values of the variables in the theoretical gap.

Questions For Authors: While the paper presents an interesting theoretical framework for analyzing the adversarial robustness of GNNs, the empirical evaluation is insufficient to support the broad claims made by the authors. The lack of comparison with state-of-the-art methods and the limited scope of the experiments weaken the paper's contribution to the field. Additionally, the assumptions made in the theoretical analysis are not thoroughly discussed, and their practical implications are not explored in depth. For these reasons, I recommend rejecting the paper in its current form.

Code Of Conduct: Affirmed.

Overall Recommendation: 1
Rebuttal 1: Rebuttal: We sincerely appreciate your thorough review and valuable suggestions. However, **we would like to clarify the first misunderstanding below**:

> 1. Lack of new insights, comparison with existing methods, and application to real-world scenarios.

**A1**: Generally speaking, our focus **does not lie in proposing a competitive algorithm tailored for a real-world application scenario** (including real-world datasets and attack scenarios). Instead, this paper **aims at a broader theoretical exploration of robust overfitting in a general adversarial scenario**. Our work not only develops a **novel analytical framework** for general GNNs (Theorem 4.8), but also provides **helpful insights** into model construction and algorithm designs (Proposition 4.14~4.18). So, it is imperative for us to further clarify that **"The lack of comparison with state-of-the-art methods and the limited scope of the experiments weaken the paper's contribution to the field" is not valid.** To be specific, this paper focuses on the robust overfitting phenomenon of GNNs and provides theoretical guidance for improving their robust generalization in a general adversarial scenario. Based on our theoretical results, our empirical evaluation focuses on the influencing factors (some model architecture-related factors, like the graph filter, weight norm, hyperparameters, etc.) and demonstrates their important roles in improving (or deteriorating) the adversarial generalization.

> 2. Analytical challenges introduced by graph data.

**A2**: **Challenge 1**: The information interaction of nodes leads to the correlation of perturbations between different nodes, making the adversarial perturbation set of graph data different from that of non-graph data. **Solution 1**: In adversarial settings, we search for the worst perturbation vector $\delta$ over all node features, which together form a perturbation matrix.
Then, by incorporating the worst perturbation vector $\delta$ into the covering analysis, Lemma 4.6 reveals an additional term $(\frac{6\theta C_{\ell}K_f}{\epsilon})^d$ influencing generalization, caused by the interaction between perturbed nodes.

**Challenge 2**: Each node in a GNN aggregates messages from its neighbor nodes through the message-passing mechanism, making the complexity measure of the GNN model class different from that of standard NNs. **Solution 2**: In the decomposition of the propagation process (Proposition 4.14~4.18), we pay attention to the information interaction of graph data in the propagation process, which is reflected in the graph filter $\sum_{j=1}^n[g(A)]_{ij}$.

> 3. Lack of discussion of the assumptions made.

**A3**: Our analytical framework doesn't require smoothness assumptions on the loss function. This paper only needs the Lipschitz continuity assumption on the loss function (Assumption 4.2) and the activation function (Assumption 4.10), which can be easily satisfied by some commonly used functions (e.g., cross-entropy and hinge loss; Sigmoid and ELU). Other assumptions about the norm constraints on the input features and weight matrices are also commonly used in the literature [1, 2]. In particular, for **Assumption 4.1**, we give the specific Lipschitz constant of each GNN model. For example, for a two-layer GCN, and $X(\delta)=[x_1+\delta,\dots,x_n+\delta]$, we have

$\Vert f_i(A,X(\delta),W)-f_i(A,X(\delta'),W)\Vert$

$\leq\rho_2\Vert\sum_{j=1}^n[g(A)] _ {ij}[\sigma(g(A)X(\delta)W_1)] _ {j*}W_2-\sum_{j=1}^n[g(A)] _ {ij}[\sigma(g(A)X(\delta')W_1)] _ {j*}W_2\Vert$

$\leq \rho_2w_2 \Vert g(A)\Vert\max_j\Vert[\sigma(g(A)X(\delta)W_1)] _ {j*}-[\sigma(g(A)X(\delta')W_1)] _ {j*}\Vert$

$\leq\rho_1\rho_2w_1w_2\Vert g(A)\Vert^2\max_j\Vert [X(\delta)] _ {j*}-[X(\delta')] _ {j*}\Vert$

$\leq K_{GCN}\Vert\delta-\delta'\Vert$.
Given the GCN with ELU activation and a normalized filter ($\rho=1$, $\Vert g(A)\Vert=1$), conducting adversarial training with Algorithm 1 (which controls $\Vert W\Vert$) could lead to a controllable and small Lipschitz constant.

> 4. Irrelevance and omission of some referenced papers.

**A4**: Thanks for pointing out the inaccuracies and omissions. We will rectify the incorrect references (see [3, 4]). As this paper focuses more on the theoretical analysis for adversarial training of GNNs, we will add additional related works about GNN attacks and defenses in the appendix for reference.

> 5. Applicable GNN types

> 6. Extension to structure perturbations.

**A5/A6**: Please refer to our responses **A2/A4** to the first reviewer (jE3b).

------

[1] Zhou, X. and Wang, H. The generalization error of graph convolutional networks may enlarge with more layers. Neurocomputing, 424:97–106, 2021.

[2] Tang, H. and Liu, Y. Towards understanding generalization of graph neural networks. ICML, pp. 33674–33719. PMLR, 2023.

[3] Ben, F., et al. Single-node attacks for fooling graph neural networks. Neurocomputing, 513:1–12, 2022.

[4] Liu, T., et al. NT-GNN: Network traffic graph for 5G mobile IoT Android malware detection. Electronics, 12(4):789, 2023.
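To make the inequality chain in A3 concrete, here is a small numpy sanity check (an editorial illustration, not the authors' code). It assumes ReLU as the hidden activation and an identity output map (so $\rho_1=\rho_2=1$), reads $\Vert g(A)\Vert$ as the maximum absolute row sum, and takes $w_1, w_2$ as spectral norms; under these choices, $K=\rho_1\rho_2 w_1 w_2 \Vert g(A)\Vert^2$ bounds the node-wise output change of a two-layer GCN, as the chain of inequalities guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_hid, d_out = 8, 5, 6, 3

# Random symmetric adjacency with self-loops; g(A) = D^{-1/2} A D^{-1/2}.
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T + np.eye(n)
deg = A.sum(axis=1)
g = A / np.sqrt(np.outer(deg, deg))

W1 = rng.standard_normal((d_in, d_hid))
W2 = rng.standard_normal((d_hid, d_out))
X = rng.standard_normal((n, d_in))
relu = lambda z: np.maximum(z, 0.0)  # 1-Lipschitz, so rho_1 = rho_2 = 1

def f(delta):
    """Two-layer GCN on X(delta): the same delta added to every node."""
    return g @ relu(g @ (X + delta) @ W1) @ W2

# K = rho_1 * rho_2 * w_1 * w_2 * ||g(A)||^2, with the norm choices above.
K = np.abs(g).sum(axis=1).max() ** 2 * np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)

d1 = 0.1 * rng.standard_normal(d_in)
d2 = 0.1 * rng.standard_normal(d_in)
lhs = np.linalg.norm(f(d1) - f(d2), axis=1).max()  # max_i ||f_i(d1) - f_i(d2)||
assert lhs <= K * np.linalg.norm(d1 - d2) + 1e-12
```

Because every step of the chain is a deterministic inequality, the assertion holds for any random draw, not just this seed.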
Summary: This paper investigates the generalization ability of graph neural networks (GNNs) under adversarial training, which is an important research direction of wide interest. The paper first proposes a high probability generalization limit and analyzes the generalization ability of GNNs under adversarial training through covering number analysis. This provides theoretical support for understanding the behavior of GNNs under adversarial learning. The paper selected three representative GNN variants for experiments and proposed a new adversarial training algorithm, whose effectiveness in improving the stability of GNN training is demonstrated through experiments.

Claims And Evidence: This paper has rigorous formula derivation, which can theoretically support the method proposed in this article. Clear logic in experimental procedures and methods.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria mainly focus on analyzing the adversarial generalization ability of GNNs, which can be applied to the robustness research of GNNs and improving their generalization ability.

Theoretical Claims: This paper has rigorous and extensive formula reasoning, and in my reading, I did not see any obvious errors.

Experimental Designs Or Analyses: In the main text and appendix of the paper, the author conducted extensive experiments on six benchmark datasets to support the claims of this article. However, the description of Algorithm 1 seems somewhat vague.

Supplementary Material: I did not review the supplementary materials.

Relation To Broader Scientific Literature: I'm not clear enough.

Essential References Not Discussed: I'm not clear enough.

Other Strengths And Weaknesses: Advantages 1. The paper has a clear structure, rigorous logic, and standardized use of symbols. 2. This paper establishes the high probability generalization limit of GNNs in adversarial learning, providing theoretical guidance for the design and training of GNNs.
Weaknesses: The derivation process of the covering number analysis is relatively complex and may be difficult to understand and apply.

Other Comments Or Suggestions: nothing

Questions For Authors: 1. You established a high probability generalization limit for GNNs in adversarial learning in your paper. Can the derivation process of this bound be further simplified? 2. You have proposed an adversarial training algorithm to learn robust GNNs, but there seems to be no clear description of the algorithm. 3. Why does the experimental part of this paper seem to lack a comparative study with previous work?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We deeply thank you for acknowledging the rigorous logic, clear structure, and extensive formula reasoning of our work. Below are our detailed responses.

> 1. Why does the experimental part of this paper seem to lack a comparative study with previous work?

**A1**: Generally speaking, our focus **does not lie in proposing a competitive algorithm tailored for a specific application scenario**. Instead, this paper **aims at a broader theoretical exploration of robust overfitting in a general adversarial scenario**. To be specific, this paper focuses on the robust overfitting phenomenon of GNNs and provides theoretical guidance for improving their robust generalization in a general adversarial scenario. Our work not only develops a **novel analytical framework** for general GNNs (Theorem 4.8), but also provides **helpful insights** into model construction and algorithm designs (Proposition 4.14~4.18). Based on our theoretical results, our empirical evaluation focuses on the influencing factors (some model architecture-related factors, like the graph filter, weight norm, number of layers, hyperparameters, etc.) and demonstrates their important roles in improving (or deteriorating) the adversarial generalization.

> 2. The description of Algorithm 1 seems somewhat vague.

**A2**: Thanks for your helpful suggestion. Let $\mathcal{A}$ be a gradient-based attack algorithm (e.g., PGD, BIM, Mettack); the updated version is provided below.

---

**Input**: Graph $G=(A,X)$, dataset $S$, perturbed dataset $\tilde{S}$, perturbation budget $\theta$, regularization parameter $\lambda$, initialization $W_0$, learning rate $\eta$, number of iterations $T$.

**while** $t<T$ **do**

$\tilde{S}_t\leftarrow\emptyset$.

**for** $i = 1, 2, \ldots, n$ **do**

For the input matrix $X_t=[x_{1,t},\dots,x_{n,t}]$, perturb $\tilde{X_t} \leftarrow X_t+\mathcal{A}(X_t,A,\theta)$.
For each perturbed node in $\tilde{X_t}=[\tilde{x} _ {1,t},\dots,\tilde{x} _ {n,t}]$, append it to $\tilde{S_t}$ and randomly choose $m$ samples for the training set $\tilde{S} _ {m,t}$.

**end for**

Define a new objective $L(W _ {i,t})=\frac{1}{m}\sum_{\tilde{X} _ {i,t}\in\tilde{S} _ {m,t}} \ell(f _ {i,t}(A,\tilde{X} _ {t}, W), y _ {i,t}) + \lambda \Vert W _ {i,t}\Vert _ {\infty}$.

For all $i\in [m]$, update $W_t$ using SGD: $W_{i,t+1}\leftarrow W_{i,t}-\eta\nabla {L}(W_{i,t})$.

**end while**

---

> 3. Can the derivation process of the covering analysis be simplified? It seems to be difficult to understand and apply.

**A3**: The covering number is a commonly used tool in (adversarial) generalization analysis [1, 2]. Let us first briefly outline our analysis techniques.

**1. For the maximization over the adversarial loss ($\max_{\tilde{X}}\ell(f_i(A,\tilde{X},W),y_i)$)** (Lemma 4.4). We construct new function classes ($L$ and $L_{dis}$) and use their covering numbers to control the covering number of the adversarial loss class $L_{adv}$.

**2. For the interplay between perturbed nodes** (Lemma 4.6). We cover the perturbation set $\mathcal{B}$ and transform the cover of the loss class $L_{dis}$ to that of the perturbed model function class $\hat{F}$.

**3. Combining the two steps** (Theorem 4.8). Now we can obtain the relation between the adversarial generalization and the covering number of the GNN model class!

**4. Covering number derivation** (Proposition 4.14~4.18). We utilize the relation between the model function $\hat{F}$ and its weight matrix $W$ to derive the covering number of the perturbed GNN class $\mathcal{N}(\hat{F},\epsilon,\Vert\cdot\Vert)$ from that of the weight matrix set $\mathcal{N}(\{W_j:\Vert W_j\Vert\leq w_j\},\epsilon_j,\Vert\cdot\Vert)$.

Actually, the main step that requires complex calculations is step 4, which is due to the propagation rules of GNNs.
Therefore, given the Lipschitz continuity assumption on the adjacency matrix, our work can be applied to topology attack scenarios (via steps 1–3). Since our covering-number-based framework is applicable to general GNNs, given a specific GNN model with the relation between the model function and the weight matrix, it can be applied to other types of GNN models (via step 4). --- [1] Tang, H. and Liu, Y. Towards understanding generalization of graph neural networks. ICML, pp. 33674–33719. PMLR, 2023. [2] Tu, Z. et al. Theoretical Analysis of Adversarial Learning: A Minimax Approach. NeurIPS, 32, 2019.
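As a concrete illustration of the adversarial training loop described in A2 above, here is a minimal NumPy sketch. It assumes a one-layer linear GNN $f(X)=(AX)w$ with squared loss, and substitutes a single-step sign (FGSM-style) perturbation for the generic gradient-based attack $\mathcal{A}$; the function name, model choice, and attack are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np

def adversarial_train(A, X, y, theta=0.1, lam=1e-3, eta=0.01, T=200, m=8, seed=0):
    """Sketch of Algorithm 1: perturb node features with a one-step sign
    attack (stand-in for PGD/BIM/Mettack), subsample m perturbed nodes,
    and take an SGD step on the infinity-norm-regularized loss."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(T):
        # attack step: move each node feature along the sign of the input gradient
        r = A @ X @ w - y                          # residuals of f(X) = (A X) w
        grad_X = (2.0 / n) * np.outer(A.T @ r, w)  # d(mean sq. loss)/dX, shape (n, d)
        X_adv = X + theta * np.sign(grad_X)
        # SGD step on m randomly chosen perturbed nodes
        idx = rng.choice(n, size=m, replace=False)
        r_adv = (A @ X_adv @ w - y)[idx]
        grad_w = (2.0 / m) * (A @ X_adv)[idx].T @ r_adv
        # subgradient of the regularizer lam * ||w||_inf
        sub = np.zeros(d)
        if np.any(w != 0):
            j = np.argmax(np.abs(w))
            sub[j] = np.sign(w[j])
        w = w - eta * (grad_w + lam * sub)
    return w
```

The infinity-norm regularizer $\lambda\Vert W\Vert_{\infty}$ is non-smooth, so the update uses a subgradient at the largest-magnitude coordinate.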
Summary: This paper establishes adversarial generalization bounds for various GNNs, such as GCN, APPNP, and GCNII, in the context of transductive learning. The authors provide some guidelines for adversarial generalization based on the theoretical results. These guidelines are all validated by the experimental results. ## update after rebuttal I had no specific concerns about this paper during my initial review. Therefore, I am maintaining my original score after the rebuttal. Claims And Evidence: Yes, the logical flow of the paper is sound, and all claims appear to be well supported through both theoretical analysis and empirical evidence. Methods And Evaluation Criteria: While the paper does not propose a specific method, it offers valuable hyperparameter guidelines for improving robustness across different GNN backbones. The evaluations are conducted on a variety of well-established datasets, yielding consistent and reasonable results. Additionally, the inclusion of diverse GNN backbones strengthens the validity and generalizability of the theoretical findings. Theoretical Claims: I am not confident in assessing the correctness of the theoretical claims or proofs. I did not verify the detailed steps of the proofs, so I cannot confirm their validity. Please take this into consideration. Experimental Designs Or Analyses: The theoretical analyses appear sound and comprehensive. Moreover, the empirical results are thoughtfully designed to support and validate the theoretical findings. Supplementary Material: No Relation To Broader Scientific Literature: Contributes to the learning theory of GNN adversarial generalization. Essential References Not Discussed: None Other Strengths And Weaknesses: - Important topic and solid theoretical analyses. - Writing quality is very good. - All theoretical findings are supported by the empirical results. Other Comments Or Suggestions: - Fig 2(a) caption might be "Experiments of adversarial training for APPNP."
Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We deeply appreciate your acknowledgment of the solid and comprehensive theoretical analysis and the thoughtfully designed empirical results presented in our paper. Thanks for pointing out the typo, and we will fix it in the future version.
Summary: The paper investigates the adversarial robust generalization of GNNs through a theoretical lens. It derives high-probability generalization bounds for general GNNs in adversarial settings using covering number analysis. The key insight is modeling the adversarial loss class’s complexity by constructing a perturbation cover and analyzing GNN architectures (e.g., GCN, APPNP, GCNII). Theoretical results reveal that adversarial generalization depends on factors like perturbation budget, model depth, weight norms, and graph filters. Experiments on benchmark datasets validate these findings, showing that normalized graph filters, shallower architectures, and regularization reduce generalization gaps. Claims And Evidence: The claims are supported by theoretical proofs. Methods And Evaluation Criteria: The methods and evaluation make sense. Theoretical Claims: The proofs are logically sound. Experimental Designs Or Analyses: Experiments vary layers, filters, and attack budget on standard datasets, but there are some weaknesses: 1. The accuracy difference metric varies depending on the dataset split and does not appear to have been tested for a sufficient number of randomized splits. 2. The datasets used are small and large-scale datasets are missing. 3. Experimental validation was performed on only three GNNs. Supplementary Material: Appendix includes proofs of key lemmas (C.1–C.4), additional experiments (Figures 7–27), and setup details. Relation To Broader Scientific Literature: The work extends adversarial generalization theory to GNNs, addressing unique challenges like transductive learning. Essential References Not Discussed: There are no essential related works that are not cited. Other Strengths And Weaknesses: There are some weaknesses: - The work seems to discuss only the case of counter-attacks against node attributes. However, for graph learning, attacks against structures are more extensively studied. 
- Can the proposed theoretical framework be adapted to other commonly used GNNs such as GAT, GraphSAGE, GIN, etc.? - The experiments conducted were limited, as described earlier. Other Comments Or Suggestions: I don't have other comments or suggestions. Questions For Authors: Please see above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your valuable comments! Please refer to our response below. > 1. Lack of testing for randomized splits of datasets. **A1**: Thanks for pointing out the lack of consideration of the impact of dataset splitting. Taking a two-layer GCN and two datasets as examples, we show the generalization gap under different random split rates of the training data (0.1, 0.3, and 0.5) in the table below. | | | Cora | | | CoraFull | | | ------------ | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | | | 0.1 | 0.3 | 0.5 | 0.1 | 0.3 | 0.5 | | $\theta=0$ | 0.478$\pm$0.013 | 0.296$\pm$0.012 | 0.255$\pm$0.014 | 0.396$\pm$0.003 | 0.284$\pm$0.002 | 0.208$\pm$0.003 | | $\theta=0.1$ | 0.516$\pm$0.014 | 0.306$\pm$0.012 | 0.261$\pm$0.012 | 0.418$\pm$0.003 | 0.301$\pm$0.002 | 0.223$\pm$0.002 | | $\theta=0.2$ | 0.551$\pm$0.016 | 0.309$\pm$0.014 | 0.269$\pm$0.015 | 0.434$\pm$0.002 | 0.311$\pm$0.001 | 0.234$\pm$0.002 | The results show that they have **consistent trends** under increasing perturbation budgets. We will include a comprehensive version of the experiments in our future version. > 2. Adaptation to other GNNs. **A2**: Our results provide a general analytical framework for GNNs, and we give three classical examples of spectral GNNs. Although specific results are not presented in the paper, **spectral GNNs** like SGC, AGCN, and GPR-GNN are feasible. Moreover, we are reasonably confident in extending our results to other types of GNNs (spatial-based GNNs). Taking a single-head GAT as an example, whose model function is $f_i(X,W)=\sigma_2(\sum_j\alpha_{ij}W_2X_{j*})$ with $\alpha_{ij}=\mathrm{softmax}(\sigma_1(w[W_1X_{i*}\Vert W_1X_{j*}]))$, we can obtain the relation between the covering number of the model function class and that of the weight matrix set (i.e.
$\mathcal{N}(\hat{F},\epsilon,\Vert \cdot\Vert)$ and $\mathcal{N}(\{W_j,\Vert W_j\Vert\leq w_j\},\epsilon_j,\Vert\cdot\Vert)$), which can be applied to our analytical framework. > 3. Large-scale datasets are missing. **A3**: Thanks for your valuable suggestions. Large-scale datasets like Nell and ogbn-arxiv will be included in our future versions. > 4. Extension to attacks against structures. **A4**: Given the similar adversarial settings for topology attacks and node attacks, **we suggest that the methodology (e.g., Lemma 4.4 and Theorem 4.8) developed in this paper can be extended to topology attacks.** To be more specific, let the adversarial graph be generated from $\{\tilde{A}:\Vert\tilde{A}\Vert\leq \gamma \}$, where $\tilde{A}=A - A'$ denotes the perturbation matrix added to the original adjacency matrix. The adversarial loss w.r.t. the adversarial graph is defined by $\max_{\Vert\tilde{A}\Vert \leq \gamma} \ell(f_i(\tilde{A},X,W),y_i)$. We analyze this analogously to the node attacks. For each function $f\in\mathcal{F}$ and a fixed $\tilde{A}_c\in\mathcal{A}$, we construct a new function $h:\mathcal{Z}\rightarrow(\mathbb{R}^n)^{\mathcal{A}}$ as $h(z_i,\tilde{A}_c)=\ell(f_i(\tilde{A}_c,X,W),y_i)$. The adversarial loss is denoted by $\max_{\tilde{A}\in\mathcal{A}} h(z_i,\tilde{A})=\max_{\Vert\tilde{A}\Vert \leq \gamma}\ell(f_i(\tilde{A},X,W),y_i)$. From the definition of the covering number, we construct the cover of the class $H$ of functions $h(z_i,\tilde{A}_c)$ and obtain the cover of the class $H_{adv}$ of functions $\max_{\tilde{A}\in\mathcal{A}} h(z_i,\tilde{A})$. Thus, the following inequality holds: $\mathcal{N}(H_{adv},\epsilon,\Vert\cdot\Vert_{\infty})\leq\mathcal{N}(H,\epsilon,\Vert\cdot\Vert_{\infty}).$ Next, we construct a cover to control the infinite class $\mathcal{A}$ by $\mathcal{C}_{\mathcal{A}}:=\{\hat{A}_j, j\in[N_A]\}$.
Similarly, we can obtain the relation between $\mathcal{N}(H,\epsilon,\Vert\cdot\Vert_{\infty})$ and $\mathcal{N}(H_{dis},\epsilon,\Vert\cdot\Vert_{\infty})$, which needs an assumption that $|\max h(z,A)-\max h(z,A')|\leq L_A\Vert A-A'\Vert$, where the constant $L_A$ can be obtained if given specific GNN models. This allows us to solve the measurement difficulty caused by the graph structure perturbations and apply it to our main results (Theorem 4.8). The remaining analysis will be left to future work. --- Rebuttal Comment 1.1: Comment: Thank you for your response, it has addressed some of my concerns, but the limited experiments and the utility of the theory are still my concerns. I have raised my score accordingly but am leaning towards a borderline acceptance. --- Reply to Comment 1.1.1: Comment: Thank you so much for increasing the score. We understand the limitations you mentioned and will take them as guidance for further improvement. Your recognition means a great deal to us.
CoPINN: Cognitive Physics-Informed Neural Networks
Accept (spotlight poster)
Summary: The paper presents a novel framework called Cognitive Physics-Informed Neural Network (CoPINN) to address the Unbalanced Prediction Problem. CoPINN employs separable subnetworks to encode one-dimensional coordinates, aggregates them to predict multi-dimensional variables, and dynamically evaluates sample difficulty based on PDE residual gradients. It progressively optimizes sampling regions from easy to hard using a cognitive training scheduler, significantly reducing prediction errors in challenging areas. Claims And Evidence: Yes, this paper is supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed method and evaluation criteria make sense. Theoretical Claims: Yes, I checked the theoretical part of self-paced learning in this paper. Experimental Designs Or Analyses: Yes, I checked the soundness and validity of all experimental designs and analyses in this paper. Supplementary Material: Yes, I checked all the content in the supplementary material. Relation To Broader Scientific Literature: Although this paper only focuses on the field of physics-informed neural networks (PINNs) and uses self-paced learning (SPL) to improve the overall performance of PINNs, the proposed SPL method has important reference value for other fields, such as classification, clustering, and retrieval. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The paper clearly articulates the research motivation with illustrative examples. 2. The paper has sufficient experiments, for example, comparative experiments with seven recent methods on five datasets. 3. The proposed method achieves significant improvements over the SOTA, for example, about 90% improvement on Helmholtz and about 70% improvement on the (3+1)-d Klein-Gordon equation. 4. This paper innovatively uses self-paced learning to solve the key unbalanced prediction problem (UPP) in existing PINN methods. 5. The proposed method is simple but effective.
Weaknesses: 1. Although the results on the Helmholtz and (2+1)-d Klein-Gordon datasets are significantly better than those of the SOTA methods, why is the improvement on the (3+1)-d Klein-Gordon and Diffusion datasets limited? This requires further analysis. 2. The authors compared their method with PINNs and FPINNs, which are not good at solving high-dimensional PDEs, and the experimental setting seems unfair. See questions for more. Other Comments Or Suggestions: No. Questions For Authors: 1. CoPINN consists of Separable Learning and a Cognitive Training Scheduler. Are the proposed Cognitive Training Scheduler and Separable Learning coupled, or can they be separated? That is, can the Cognitive Training Scheduler be used for the vanilla PINN? 2. The paper mentions using different numbers of sampling points, i.e., $16^3, 32^3, \ldots, 256^3$, to train the neural network. Is the same number of sampling points also used for testing? Or is it another number? 3. Hyperparameter $\beta$ has a great impact on the performance of the algorithm. What value is recommended for solving other equations, such as the NS equation? 4. Although the results on the Helmholtz and (2+1)-d Klein-Gordon datasets are significantly better than those of the SOTA methods, why is the improvement on the (3+1)-d Klein-Gordon and Diffusion datasets limited? This requires further analysis. 5. What is the performance when removing the Cognitive Training Scheduler? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We appreciate your detailed comments. We believe the following point-by-point response can address all the concerns: **Q1: Weaknesses (1) and Questions For Authors (4)** **R1:** Compared with the Helmholtz and (2+1)-d Klein-Gordon datasets, the (3+1)-d Klein-Gordon and Diffusion datasets are relatively less challenging. Therefore, almost all the methods achieve good results. Since most samples in these two datasets are easy, our CoPINN has limited opportunity to mine difficult samples, and the performance advantage is relatively less pronounced. Even so, CoPINN yields improvements of $50\%$ (Klein-Gordon in (3+1)-d) and $10\%$ (diffusion equations). **Q2: Weaknesses (2)** **R2:** We claim that PINN and FPINN can solve high-dimensional PDEs. However, as the number of training (collocation) points increases, especially for higher-dimensional or more complex problems, their computational burden becomes more pronounced. To solve this problem, we adopt a separable architecture to efficiently handle high-dimensional PDEs. In addition, we also compare with SPINN (which is good at solving high-dimensional PDEs), so the experimental setting is fair. **Q3: Questions For Authors (1)** **R3:** Separable Learning and the Cognitive Training Scheduler are two independent modules. To be specific, Separable Learning mitigates the computational burden in high-dimensional PDE solutions. In contrast, the Cognitive Training Scheduler assesses sample-level difficulty, enabling neural networks to fit samples from easy to hard when solving PDEs. The Cognitive Training Scheduler is plug-and-play and can be used for the vanilla PINN. **Q4: Questions For Authors (2)** **R4:** We apologize for the lack of clarity. Following the setup of SPINN, for the Helmholtz, (2+1)-d Klein-Gordon, (3+1)-d Klein-Gordon, Diffusion, and Flow Mixing 3D equations, we set the sampling points to $100^3, 100^3, 50^3, 101^3$, and $100^3$ during testing, respectively.
We will include this in the implementation details of the next version. **Q5: Questions For Authors (3)** **R5:** Through the parameter sensitivity analysis in Figure 5, we find that the performance is better when the $\beta$ value is between $0.01$ and $0.00001$. In fact, if $\beta$ is fixed to $0.001$, CoPINN achieves SOTA on all datasets. Therefore, for other PDEs, such as the N-S equation, we recommend that the $\beta$ value be selected from $\{0.01, 0.001, 0.0001, 0.00001\}$. **Q6: Questions For Authors (5)** **R6:** In Section 3.3 of our original paper, we conduct an ablation study to analyze the effectiveness of each component in the cognitive training scheduler of our CoPINN. CoPINN-1 represents using the original loss function, i.e., removing the Cognitive Training Scheduler. The results in Table 2 show that the performance of CoPINN-1 is lower than that of our proposed CoPINN. --- Rebuttal Comment 1.1: Comment: Thanks for your responses. The authors have addressed all my concerns. This paper reveals and studies a less-touched unbalanced prediction problem (UPP) in PINNs. By imitating the human cognitive learning process, the authors proposed a novel cognitive PINN framework to adaptively optimize the model from easy to hard, thereby alleviating the negative effect of the hard samples in stubborn regions during the learning process. In general, the motivation of this paper is clear, and the idea is interesting and novel. This approach inspires us to solve the unbalanced prediction problem of stubborn regions in PINNs from a cognitive learning perspective. Numerous experiments also show that the proposed method has achieved significant improvement compared with state-of-the-art methods, which proves the feasibility of cognitive learning for PINNs. Therefore, I raise my rating and recommend acceptance of the paper. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful review of our manuscript.
We sincerely appreciate the time and effort you dedicated to evaluating our work and providing insightful comments.
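To make the easy-to-hard weighting discussed in the rebuttal above concrete, here is a hypothetical NumPy sketch of a cognitive-scheduler-style weight function. Difficulty is taken from the per-sample gradient magnitude of the PDE residual, and a rate parameter `beta` (playing a role analogous to CoPINN's $\beta$) controls how quickly the emphasis shifts from easy to hard samples; this is an illustration under stated assumptions, not the paper's exact schedule (Eqs. 8–9).

```python
import numpy as np

def cognitive_weights(residual_grad_norms, step, beta=0.001):
    """Hypothetical easy-to-hard scheduler: per-sample difficulty comes
    from the gradient magnitude of the PDE residual, and the weighting
    shifts linearly from easy toward hard samples as training proceeds."""
    g = np.asarray(residual_grad_norms, dtype=float)
    # rank-normalize difficulty to [0, 1]: 0 = easiest, 1 = hardest
    difficulty = np.argsort(np.argsort(g)) / max(len(g) - 1, 1)
    focus = min(1.0, beta * step)  # ~0 early in training, -> 1 later
    w = (1.0 - focus) * (1.0 - difficulty) + focus * difficulty
    return w / w.sum()
```

With `beta = 0.001`, the emphasis completes its shift after roughly 1000 steps; smaller `beta` values would make the transition more gradual, in line with the recommended range discussed in R5.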
Summary: The paper proposes an adaptive sample weighting strategy for physics-informed neural networks. As a measure of difficulty for a sampling point, the magnitude of the (input) gradient of the PDE residual is proposed. The authors suggest training PINNs by assigning high sample weights to easy samples early on in the training process and then gradually shifting towards hard samples, reducing the weight for the easy ones. The authors then compare their proposed method on a number of example equations and benchmark against several baselines. The experimental results suggest improved accuracy compared to the baseline methods. ## update after rebuttal The authors were responsive and could clear up some of my doubts and misunderstandings. I will raise my score. Claims And Evidence: The improved accuracy and training are illustrated by a reasonable set of numerical simulations. Methods And Evaluation Criteria: The set of numerical examples is reasonable and the problems considered are of suitable difficulty to be interesting. The comparison to existing methods in the literature is sound. The manuscript would however benefit from a quantification of the randomness (over the network initialization) in the training process. The authors might consider doing this for a subset of the experiments, as an evaluation for the full set of experiments is likely unreasonably computationally expensive. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: The considered PDE problems are reasonable. Supplementary Material: I read the Appendix. Relation To Broader Scientific Literature: Some related literature is not discussed: there are a number of works concerning adaptive sampling for PINNs, see for instance [1], [2] and the references therein. Interestingly, these works take a different approach, focusing on hard samples more than on easy samples via adapting the sample distribution based on the PDE residual.
This seems somewhat contradictory to the presented findings and requires a thorough discussion. Moreover, the authors of [1] also present a convincing argument for focusing on high residual/difficult regions first: for many examples — take for instance the convection equation example in [1] — the PDE residual without the boundary conditions possesses trivial minimizers, namely constant functions. Focusing on interior regions first therefore seems not reasonable. [1] https://arxiv.org/pdf/2207.02338 [2] https://www.sciencedirect.com/science/article/abs/pii/S0045782522006260 Essential References Not Discussed: See above, especially [1]. Other Strengths And Weaknesses: Strengths: - Convincing numerical results. Weaknesses: - The "conflict" with the existing literature should be discussed and ideally resolved. As mentioned above, existing sampling methods focus on hard samples, whereas the present work starts with easy samples. - A quantification of the randomness of the training process should be provided, at least for some experiments. Is the training process more brittle than the baseline methods? Other Comments Or Suggestions: - The resolution of Figure 1 should be increased. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We greatly appreciate your valuable comments. Below is our point-by-point response. **Q1: Essential References Not Discussed & Questions For Authors (1)** **R1:** 1) We will include the discussion about adaptive sampling in the Introduction and Related Work Sections of the next version. The key to adaptive sampling is to improve the learning efficiency and accuracy of the neural network by dynamically selecting or reallocating sampling points. Specifically, for [1] you mentioned, it balances regions of high and low residuals by dynamically resampling. For [2] you mentioned, it uses resampling to increase the sampling points in high PDE residual regions to avoid so-called propagation failures. We further add a review of other literature: [3] proposes a risk min–max framework to do adaptive sampling to speed up the convergence of the loss and achieve higher accuracy. 2) From a technical perspective, adaptive sampling technology uses PDE residuals to adjust sample density. Our CoPINN uses the gradient of PDE residuals to estimate the learning difficulty of samples, thereby optimizing the learning process of the neural network. From a conceptual perspective, adaptive sampling technology increases the density of difficult samples while reducing the density of simple samples, thereby fully learning difficult samples. Unlike them, our CoPINN does not involve changes in sampling and sample density. CoPINN directly evaluates the learning difficulty of samples and imitates the human cognitive process to learn from easy to difficult, thereby improving accuracy. In general, adaptive sampling focuses on adjusting the number of difficult and easy samples, and CoPINN uses the early learning of simple samples to fully mine information from difficult samples in the later stage of learning. They are two different technical routes and there is no conflict. 
3) [2] proposes that the correct solution of a PINN should propagate from the boundary to the center; otherwise, it will cause trivial solutions. Therefore, PINN training should focus on the boundary. Our CoPINN doesn't ignore boundary conditions and initial conditions during training. Although CoPINN pays less attention to difficult samples (usually at the boundary) according to the gradient of the PDE loss in the early stage of training, it does not ignore difficult samples. As learning progresses, CoPINN pays more attention to difficult samples. Therefore, CoPINN doesn't ignore difficult samples in the early stage of learning, so it will not cause the trivial solutions mentioned by the reviewer and is therefore reasonable. [1] A comprehensive study of non-adaptive and residual-based adaptive sampling for physics-informed neural networks. [2] Mitigating Propagation Failures in Physics-informed Neural Networks using Retain-Resample-Release (R3) Sampling. [3] A Gaussian mixture distribution-based adaptive sampling method for physics-informed neural networks. **Q2: Questions For Authors (2)** **R2:** Regarding your question, "A quantification of the randomness of the training process", we don't fully understand what it means. Based on your comment, we guess that you are doubtful about the stability of our CoPINN. In our paper, all quantitative experiments are repeated 5 times with different random seeds, and the average values are reported. To demonstrate the stability of CoPINN, below, we report the mean and standard deviation of the relative $L_2$ error on the Helmholtz dataset. The results show that CoPINN achieves both the lowest mean and standard deviation in relative $L_2$ error, and confirm that CoPINN's training process is more robust than competing methods. If we misunderstood, please clarify, and we will provide further details.
| | $16^3$ | $32^3$ | $64^3$ | $128^3$ | $256^3$ | | ------------ | ----------------- | ----------------- | ----------------- | ----------------- | ----------------- | | AHD-PINN | $0.2108\pm0.0181$ | $0.1903\pm0.0418$ | $0.1871\pm0.0372$ | O/M | O/M | | RoPINN | $0.4059\pm0.0353$ | $0.3338\pm0.0502$ | O/M | O/M | O/M | | FPINN | $0.3862\pm0.0714$ | $0.3502\pm0.0676$ | $0.3097\pm0.0580$ | O/M | O/M | | SPINN | $0.1177\pm0.0451$ | $0.0809\pm0.0104$ | $0.0592\pm0.0316$ | $0.0449\pm0.0337$ | $0.0435\pm0.0280$ | | SPINN(m) | $0.1161\pm0.0084$ | $0.0595\pm0.0113$ | $0.0360\pm0.0082$ | $0.0300\pm0.0016$ | $0.0311\pm0.0100$ | | CoPINN(Ours) | $0.0172\pm0.0052$ | $0.0050\pm0.0030$ | $0.0016\pm0.0006$ | $0.0007\pm0.0002$ | $0.0006\pm0.0001$ | **Q3: Other Comments Or Suggestions** **R3:** Thanks, we will increase the resolution of Figure 1 in the next version. --- Rebuttal Comment 1.1: Comment: 1. Regarding quantification of the randomness of the training process: Thanks for clarifying that you ran your experiments with 5 different seeds -- I have likely confused this while reading the manuscript, sorry about that. So this is not an issue then. 2. I appreciate that you include a discussion of the literature regarding sampling, this is helpful. I also understand that you are not resampling but re-weighting. Based on my own experience with training PINNs I know that running into trivial solutions can be a challenging issue. It would be good if you try your method on the convection equation; the example that is in [2] with $\beta=50$ is a good example to observe this. I do not care whether your method is better or worse than the baseline in [2] for this example -- I think it is important to know if it still works for such examples. I understand time might be short to do this within the discussion period, but it can certainly be done before a (possible) camera-ready version.
--- Reply to Comment 1.1.1: Comment: We truly appreciate the effort and attention you have given to reviewing our manuscript. Your comments were incredibly helpful, and we have taken great care in revising the manuscript to address all of your concerns. To address your concerns about running into trivial solutions with our CoPINN, we follow your suggestion and conduct experiments on the convection equation with $\beta=50$ as described in [1]. The experimental results are recorded as follows. From the results, the performance of our CoPINN is indeed slightly lower than the baseline Causal R3 and slightly higher than the baseline R3. However, our CoPINN significantly outperforms the other compared methods. **This demonstrates that our method remains effective in this scenario, which successfully avoids trivial solutions and provides reliable predictions.** | Method | Relative L2 Error (%) | | ----------- | --------------------- | | Causal PINN | $72.5 \pm 3.82$ | | RAD | $67.1 \pm 1.57$ | | R3 | $1.47 \pm 0.45$ | | Causal R3 | $1.14 \pm 0.11$ | | Ours | $1.31 \pm 0.63$ | Thank you again for your constructive comments, which further inspired us to study how to avoid the problem of trivial solutions in PINNs. Due to the limited time, we will provide more detailed experimental details and more comprehensive experimental results (including the results of our CoPINN on the convection equation with different $\beta$) in the camera-ready version to further support the effectiveness of our method. **We hope our response addresses all of your concerns. If you have any further insights or suggestions, please feel free to share them. Due to the rebuttal mechanism of ICML, we might not be able to provide a response again. However, we will certainly incorporate any constructive feedback into the camera-ready version.** [1] Mitigating Propagation Failures in PINNs using R3 Sampling
Summary: The paper proposes CoPINN, a Cognitive Physics-Informed Neural Network that addresses the Unbalanced Prediction Problem (UPP) in PINNs. UPP arises from treating easy and hard samples (e.g., boundary vs. smooth regions) equally, leading to unstable training. CoPINN introduces three key components: (1) separable subnetworks for encoding 1D coordinates to reduce computational costs, (2) dynamic difficulty evaluation of samples via PDE residual gradients, and (3) a cognitive training scheduler that progressively focuses on harder samples. Experiments on Helmholtz, Klein-Gordon, and Diffusion equations demonstrate state-of-the-art performance, with significant error reductions. The method also shows scalability to high-dimensional PDEs and robustness in boundary regions. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem at hand. Theoretical Claims: I checked the correctness of proofs for theoretical claims. The difficulty measure is heuristic, and the linear weight schedule (Eq. 8–9) is intuitive. Experimental Designs Or Analyses: I checked the soundness/validity of the experimental design and analysis. In this paper, a wide range of experiments are carried out on multiple PDEs and dimensions, and the experiments and the analysis of the experiments are sound and valid. Supplementary Material: I reviewed all parts of the supplementary material. The appendices clarify implementation details and validate robustness. Relation To Broader Scientific Literature: The paper builds on PINNs (Raissi et al., 2019) and SPINN (Cho et al., 2023), integrating self-paced learning (Jiang et al., 2015) for difficulty-aware training. Essential References Not Discussed: No relevant works are critical to understanding the main contribution (context) of the paper, but are not currently cited/discussed in the paper. 
Other Strengths And Weaknesses: Strengths: 1. This paper finds the problem of existing PINNs through experimental results (equal treatment of samples of different difficulty, resulting in non-optimal performance) and innovatively proposes Cognitive PINN; the experimental results prove the effectiveness of the proposed method. 2. This paper integrates self-paced learning into PINNs to effectively solve the unbalanced prediction problem by dynamically prioritizing sample difficulty during training. 3. The proposed method demonstrates scalability to high-dimensional PDEs (e.g., 4D systems), overcoming memory constraints faced by baseline approaches. 4. In this paper, experimental results on different PDEs (Helmholtz, Klein-Gordon, Diffusion) show significant error reduction and robustness in the boundary region. Weaknesses: 1. In this paper, the proposed cognitive scheduler and difficulty evaluation mechanism are not theoretically proved, so the optimality of this method has not been verified. 2. The proposed methods are sensitive to hyperparameters such as β, which presents tuning challenges, especially in balancing attention between simple and hard samples of different PDEs. 3. Progressive weighting of hard samples risks overfitting to localized boundary phenomena at the expense of global solution accuracy, especially in datasets with small difficulty variations. Other Comments Or Suggestions: This article has some typos, including but not limited to: "Klei-Gordon" → "Klein-Gordon" (Page 14). Questions For Authors: 1. How does CoPINN mitigate overfitting to hard samples in later training stages? In other words, how do you ensure that the prediction accuracy of easy samples does not decline in the later training period? 2.
Can CoPINN scale to PDEs beyond 4D (e.g., 5D or 6D)? This is crucial for the scalability of the proposed method. 3. Why use the IMP metric instead of standard relative improvement? This may lead to an overstatement of the experimental results of the proposed method. Code Of Conduct: Affirmed. Overall Recommendation: 4
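The separable-subnetwork idea summarized in this review (encode each 1D coordinate axis independently, then combine per-axis features by outer products, as in SPINN-style models, so a d-dimensional grid needs only d·N rather than N^d network evaluations) can be sketched as follows; the function and toy "subnetworks" here are our illustrative stand-ins, not the paper's code:

```python
import numpy as np

def separable_eval(subnets, axes, rank):
    """Evaluate u on a d-dimensional grid from per-axis subnetworks.
    Each subnet maps N points on one axis to an (N, rank) feature matrix;
    the solution on the full N^d grid is a sum over rank of outer products,
    so only d*N network evaluations are needed instead of N^d.
    Toy stand-in for the separable idea; not the paper's implementation."""
    feats = [net(ax) for net, ax in zip(subnets, axes)]  # each (N_i, rank)
    u = np.zeros(tuple(f.shape[0] for f in feats))
    for r in range(rank):
        outer = feats[0][:, r]
        for f in feats[1:]:
            outer = np.multiply.outer(outer, f[:, r])  # build rank-1 term
        u += outer
    return u

# toy "subnetwork": rank-2 feature map per axis
net = lambda x: np.stack([np.sin(x), np.cos(x)], axis=1)
axes = [np.linspace(0, 1, 50)] * 3
u = separable_eval([net] * 3, axes, rank=2)
# u has shape (50, 50, 50) but uses only 3*50 (not 50**3) subnet evaluations
```

Here u[i,j,k] equals sin(x_i)sin(x_j)sin(x_k) + cos(x_i)cos(x_j)cos(x_k), i.e., a low-rank separable function, which is the structural assumption such architectures exploit.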
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their constructive feedback. Below are our responses to the questions raised: **Q1: Questions For Authors(1)** **R1:** Our proposed CoPINN employs a multi-faceted approach to prevent overfitting to hard samples during training. First, the cognitive training scheduler ensures a controlled and gradual transition of focus from easy to hard samples through the hyperparameter $\beta$, which is intentionally set to values less than 0.5 to avoid abrupt weight shifts and maintain a balanced emphasis across regions. Second, the physics-informed loss terms, specifically the initial condition loss $\mathcal L_{ic}$ and boundary condition loss $\mathcal L_{bc}$, act as implicit regularizers by enforcing physical constraints across the entire domain, thereby anchoring predictions to known conditions even as harder samples are prioritized. Finally, the robust generalization has been experimentally validated, with consistently low errors in boundary and smooth regions (e.g., Table 1, Figures 4, 9, and 10 in the original paper), confirming that the model does not overfit to specific challenging regions while maintaining accuracy across the entire solution space. This combination of gradual weighting, physics-based regularization, and experimental validation ensures stable and generalizable training. **Q2: Questions For Authors(2)** **R2:** The separable architecture of our proposed CoPINN inherently supports scalability to higher-dimensional PDEs by encoding each dimension independently through dedicated subnetworks, effectively reducing computational complexity from $O(N^d)$ to $O(dN)$, where $d$ is the number of dimensions and $N$ is the resolution per axis. While our experiments have focused on up to 4D systems (e.g., (3+1)-d Klein-Gordon) due to hardware limitations, the design principles theoretically generalize to higher dimensions. 
For instance, in 5D/6D scenarios, techniques like tensor decomposition (e.g., CP or Tucker formats) can optimize the aggregation of outer products in Equation 3, further enhancing efficiency. Our future work will explore these optimizations to validate scalability in extreme dimensions while maintaining the model’s accuracy and computational feasibility. **Q3: Questions For Authors(3)** **R3:** We agree that clarity in evaluation metrics is critical. The IMP metric in our paper is calculated as: $IMP=\frac{|u_s-u_b|}{u_s}\times 100\%$ where $u_s$ is the error of the suboptimal baseline and $u_b$ is the error of CoPINN. Since $u_b<u_s$ in all experiments (as CoPINN outperforms baselines), it is mathematically equivalent to the standard relative improvement formula. To eliminate ambiguity, we will revise the formula in the final version to: $$IMP=\frac{u_s-u_b}{u_s} \times 100 \%$$ removing the absolute value. This aligns with standard practice and ensures no overstatement. For example: In Table 1, a baseline error of $0.0311$ (SPINN(m)) vs. CoPINN’s $0.0006$ gives $IMP=\frac{0.0311-0.0006}{0.0311}\times 100\%\approx 98\%$, which correctly reflects the relative error reduction. In addition, we have explicitly reported both relative $L_2$ errors (e.g., Helmholtz's CoPINN: $0.0006$ vs. SPINN(m): $0.0311$), $RMSE$, and $IMP$ in Table 1 of the original paper. The performance is comprehensively demonstrated through two metrics (relative $L_2$ and $RMSE$), with the improvement magnitude illustrated via $IMP$. We will clarify this in the text. **Q4: Other Comments Or Suggestions** **R4:** We have carefully corrected the typos in the paper: "Klei-Gordon" → "Klein-Gordon" (Page 14); "$u(x,0)=x_1+x_2, x \in \Omega, x \in \Omega$"→"$u(x,0)=x_1+x_2, x \in \Omega$" (Page 12). --- Rebuttal Comment 1.1: Comment: Thanks for the authors' responses. 
After reviewing their responses and considering the comments of other reviewers, I believe the authors have addressed my concerns effectively. In summary, the paper is well-organized and clearly written, with the figures aiding readers in understanding the algorithmic motivation effectively. Inspired by human cognitive learning, the proposed CoPINN is the first work to leverage self-paced learning to enhance the PINN performance in difficult regions. The technical solution is novel and reasonable. Extensive experiments are provided to make it easy to understand the contribution of the proposed method and the effectiveness of the results. Thus, I would keep my score to support my acceptance recommendation. --- Reply to Comment 1.1.1: Comment: Thanks for your support. Based on your suggestions, we will further improve the quality of our manuscript in the final version.
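The IMP formula discussed in this rebuttal thread is simple enough to state as a small helper; a minimal sketch (the function name is ours, not from the paper's code):

```python
def imp(u_s: float, u_b: float) -> float:
    """Relative improvement (%) over the suboptimal baseline.

    u_s: error of the suboptimal (second-best) baseline.
    u_b: error of the proposed method; u_b < u_s is assumed,
    matching the rebuttal's claim that CoPINN outperforms baselines.
    """
    return (u_s - u_b) / u_s * 100.0

# Helmholtz example from the thread: SPINN(m) error 0.0311 vs. CoPINN 0.0006
print(round(imp(0.0311, 0.0006), 1))  # prints 98.1
```

This makes the rebuttal's point concrete: with `u_b < u_s`, the absolute-value and non-absolute-value versions of the formula coincide.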
Summary: The authors look at the PINNs setting, based on training a neural network to conform to the PDE residual. They employ a method that dynamically samples collocation points according to the gradient of the PDE residual. They use this as a signal to do PINNs training starting with the solution on the collocation points that are easy to learn, and then moving to the harder-to-learn regimes. They demonstrate this on four different PDEs. Claims And Evidence: The authors claim that the three categories of PINNs methods include designing the architecture of PINNs, changing the loss function, and changing the weights on the loss function for PINNs. The authors claim that their method works the best compared to various other PINNs methods on the four different PDEs. However, they are missing a very important line of work that does adaptive sampling of the collocation points of PINNs, which is similar to what the authors are doing. For example, [1]. [1] Wu et al. A comprehensive study of non-adaptive and residual-based adaptive sampling for physics-informed neural networks. Computer Methods in Applied Mechanics and Engineering, 403, 115671, 2023. Methods And Evaluation Criteria: The method the authors employ is to track gradients, which is a proxy for the difficulty of learning the solution on each collocation point. They start with the model putting more weight on the easier-to-learn regions of the solution space, and then employ a scheduler that gradually starts to put more weight on the harder-to-learn regimes. This is done in a dynamic fashion over the course of training. As far as the evaluation, this is done by computing L2 relative error with respect to a reference solution, as well as looking at the RMSE. However, for problems that are not the diffusion problem, it is not stated exactly what the L2 relative error is compared to: what is the reference solution that the predicted solution is compared to, and how was this data generated? 
Theoretical Claims: There are no theoretical results. Experimental Designs Or Analyses: The authors set up 4 PDE problems to solve. They analyze this based on L2 relative error and RMSE. See above comment for questions on how reference solution data is generated and compared to. As mentioned earlier, there isn’t a mention or comparison of the many adaptive sampling PINNs papers, such as [1]. It seems like [2] also looks at the Helmholtz problem (different source terms) through changing the loss function, and seems to mitigate this issue through loss reweighting. There are also lines of work that aim to address some of the issues the authors note, such as certain regimes being harder to learn, through imposing hard constraints [3] [4]. [1] Wu et al. A comprehensive study of non-adaptive and residual-based adaptive sampling for physics-informed neural networks. Computer Methods in Applied Mechanics and Engineering, 403, 115671, 2023. [2] Wang et al. Understanding and mitigating gradient flow pathologies in physics-informed neural networks. SIAM Journal on Scientific Computing (2021) [3] Lu et al. Physics-Informed Neural Networks with Hard Constraints for Inverse Design. CMAME (2021) [4] Chalapathi et al. Scaling physics-informed hard constraints with mixture-of-experts. ICLR (2024) Supplementary Material: I read through all of the supplementary material, which includes more details on the different PDEs, additional results, and related work. It did not answer some of the questions I had. Relation To Broader Scientific Literature: This work is part of the vast, and very well-studied, literature on PINNs and ML for solving PDEs. It proposes another method to improve the training of PINNs. Given the amount of literature in this space, there is a lot more that can be done to position this work in the broader context of the field. 
Essential References Not Discussed: This work does not discuss a broad set of PINNs literature that has focused on “adaptive sampling” techniques where different parts of the domain or different points are weighted differently, and more weight is given to points that have higher PDE residuals, such as [1]. [1] Wu et al. A comprehensive study of non-adaptive and residual-based adaptive sampling for physics-informed neural networks. Computer Methods in Applied Mechanics and Engineering, 403, 115671, 2023. Other Strengths And Weaknesses: See above. In general, the bar for new PINNs methods is high, as this area has been well-studied for a number of years and there are many new methods. This method doesn’t seem to be that different from many other methods, and so any proof-of-concept should also be on harder problems. Other Comments Or Suggestions: See above. Questions For Authors: My questions are interspersed above, and some of them are summarized here: - For problems that are not the diffusion problem, it is not stated exactly what the L2 relative error is compared to: what is the reference solution that the predicted solution is compared to, and how was this data generated? - This work is missing any discussion and comparison of a long line of work in doing adaptive sampling, where different parts of the domain are weighted differently (typically where PDE residuals are higher, more weight is put on this). This work is very related to what the authors are proposing. How do such methods compare? Code Of Conduct: Affirmed. Overall Recommendation: 2
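For context on the metrics the reviewer asks about: the relative L2 error reported in PINN papers is typically computed against a reference solution evaluated on a fixed grid. A minimal sketch (array names are illustrative, not from the paper):

```python
import numpy as np

def relative_l2(u_pred: np.ndarray, u_ref: np.ndarray) -> float:
    """Relative L2 error ||u_pred - u_ref||_2 / ||u_ref||_2 over grid points."""
    return float(np.linalg.norm(u_pred - u_ref) / np.linalg.norm(u_ref))

def rmse(u_pred: np.ndarray, u_ref: np.ndarray) -> float:
    """Root-mean-square error over grid points."""
    return float(np.sqrt(np.mean((u_pred - u_ref) ** 2)))

# toy check: a uniform 10% overshoot gives a relative L2 error of 0.1
u_ref = np.linspace(1.0, 2.0, 101)
u_pred = 1.1 * u_ref
print(relative_l2(u_pred, u_ref))  # 0.1 (up to floating point)
```

The open question in the review is then precisely what `u_ref` is for the non-diffusion problems and how it was generated, since the metric is only as trustworthy as the reference solution it compares against.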
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their constructive feedback. Below are our responses to the questions raised: **Q1: Methods And Evaluation Criteria & Questions For Authors** **R1:** The reference solutions refer to the labels used to compute the relative $L_2$ error and $RMSE$ by comparing them with the model's predicted solutions. Following [1-3], for the Diffusion equation, reference solutions are obtained through the widely-used PDE solver platform FEniCS at a resolution of 101×101×101. For the Helmholtz and Klein-Gordon equations (3D and 4D), reference solutions at a resolution of 100×100×100 are obtained from independently designed manufactured solutions, respectively. **Q2: Claims And Evidence & Experimental Designs Or Analyses & Essential References Not Discussed & Questions For Authors** **R2:** According to your suggestion, we will discuss adaptive sampling in the related work section. We acknowledge that existing work in the field of adaptive sampling, such as RAD/RAR-D [4], provides valuable insights relevant to our study. However, CoPINN fundamentally differs from these existing adaptive sampling methods: 1. Existing adaptive sampling methods like RAD/RAR-D [4] primarily focus on the global residual distribution, improving overall accuracy by resampling or adding points near high-residual points, but they ignore the Unbalanced Prediction Problem (UPP) of PINNs. In contrast, the core objective of CoPINN is to resolve the UPP commonly observed in traditional PINNs near physical boundaries. We find that relying solely on absolute residual values (e.g., as in RAD) may overlook the high-gradient nature of boundary regions, leading to local overfitting or underfitting. Therefore, CoPINN introduces a residual gradient-based dynamic difficulty assessment to more accurately identify abrupt changes in boundary regions (e.g., shocks and singularities). 2. 
Existing adaptive methods (e.g., RAR-D) employ a greedy strategy to incrementally add high-residual points but do not consider the model’s optimization capability at different training stages. To address this, CoPINN incorporates a cognitive training scheduler, which prioritizes simpler regions (low-gradient areas) in the early stages to stabilize training, while progressively increasing the weight of more complex regions (high-gradient areas) in later stages. This prevents the model from prematurely converging to suboptimal local solutions. **Q3: Experimental Designs Or Analyses** **R3:** The reviewer notes that prior work [5] addresses similar issues via loss reweighting. However, our CoPINN fundamentally differs in both methodology and scope. [5] focuses on loss-term balancing (e.g., PDE residual vs. boundary loss) without considering spatial/temporal variations in sample difficulty. In contrast, inspired by the human curriculum learning strategy of progressing from easy to hard tasks, CoPINN introduces a cognitive learning approach. It evaluates sample-level difficulty (via PDE residual gradients) and progressively prioritizes harder regions during training. CoPINN is finer-grained, addressing intra-term imbalances (e.g., stubborn points vs. smooth regions). **Q4: Experimental Designs Or Analyses** **R4:** There are two key differences between the approach with hard constraints [6-7] and our proposed CoPINN. 1. [6-7] focus on inverse problems (e.g., geometric parameter optimization), with the core objective of simultaneously satisfying PDE constraints and optimization objectives. In contrast, CoPINN targets the Unbalanced Prediction Problem (UPP) in forward PDE solving, aiming to address error caused by varying sample difficulties in PINN training. Their application scenarios are fundamentally distinct. 2. 
[6-7] employ fixed hard constraints (e.g., modified network architectures) and optimization strategies to ensure strict adherence to physical properties during the prediction process, enhancing accuracy through the incorporation of additional constraints. CoPINN introduces a dynamic cognitive scheduler that evaluates sample difficulty via PDE residual gradients and progressively adjusts training weights from easy to hard samples, constituting a data-driven adaptive learning paradigm. This training strategy, inspired by human cognitive processes, remains unexplored in the PINN domain. [1] Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. [2] Gradient-enhanced physics-informed neural networks for forward and inverse PDE problems. [3] Separable physics-informed neural networks. [4] A comprehensive study of non-adaptive and residual-based adaptive sampling for physics-informed neural networks. [5] Understanding and mitigating gradient flow pathologies in physics-informed neural networks. [6] Physics-informed neural networks with hard constraints for inverse design. [7] Scaling physics-informed hard constraints with mixture-of-experts. --- Rebuttal Comment 1.1: Comment: Thank you for the response. This response doesn't address some of my key concerns including comparisons to some of these important and related areas, as well as studies on harder problems (many of the problems studied in this paper have been previously well-studied by PINNs approaches, as was mentioned in my original comment). Showing both clear comparisons of this approach vs. the others, as well as analysis on if and why this approach indeed works better (how does the learning change, where error ends up concentrating) would make this paper stronger. As it stands now, I don't think this paper stands out that much from the vast literature on PINNs and methods to make PINNs converge better. 
--- Reply to Comment 1.1.1: Comment: We appreciate your helpful comments and will add the comparison and analysis to the next version of our paper, and make comprehensive revisions based on the above important discussions. Thanks again for your valuable suggestions and comments. **Q1: Comparisons to some of these important and related areas, as well as studies on harder problems.** We first emphasize that the proposed CoPINN is fundamentally different from these related approaches [1-4] in both the technical framework and research problem. **(1) In terms of the framework,** we propose a novel cognitive PINN method, which first prioritizes easy samples and gradually shifts focus to more challenging samples, thereby enhancing the model’s generalization to difficult samples. To the best of our knowledge, our CoPINN could be the first work that leverages self-paced learning to enhance the PINN performance in difficult regions. **In addition, the novelty of our approach is also recognized by the other three reviewers.** **(2) In terms of the research problem,** although some adaptive sampling methods address the so-called harder problems, they cannot handle the unbalanced prediction problem under limited-data conditions. Moreover, when facing high-dimensional PDEs, they suffer from high computational costs. **Therefore, the problem studied in this paper has not been well-studied by these previous PINN approaches.** Then, **to better clarify the differences between CoPINN and reference methods,** we provide the following detailed description, which will be updated in our final version. (1) Typical adaptive methods such as RAD and RAR-D [1] require dynamic sampling or addition of new data points during training, particularly in high-residual regions. These methods become inapplicable when data cannot be updated. For instance, when only fixed predefined points are available, their core premise of optimizing training through sampling-distribution adjustments no longer applies. 
Moreover, adaptive sampling techniques face the issue that the number of required sampling points grows exponentially with dimensionality, rendering dynamic distribution adjustments computationally prohibitive. This limitation explains why such methods have only been tested on up to 2D datasets. In contrast, CoPINN optimizes the model by dynamically adjusting sample weights without altering data point locations or quantities. This enables performance improvements even with non-updatable fixed data points. Furthermore, our CoPINN proposes Separable Learning to reduce cross-dimensional computational coupling and avoid high-dimensional tensors, thereby significantly decreasing parameter counts and complexity. Thus, our CoPINN could be more effectively extended to high-dimensional spaces. (2) Loss reweighting methods [2] perform coarse adjustments at the loss-term level, failing to address prediction imbalances caused by per-sample difficulty variations. While they balance gradients across different loss terms through global weights, they neglect local heterogeneity among samples within the same loss term. Compared to this, CoPINN achieves fine-grained optimization through sample-level difficulty assessment and dynamic weight scheduling, thereby mining more information from complex regions such as abrupt transition zones. (3) In physical systems, certain regions like shock waves and singularities exhibit drastic variations in physical quantities, resulting in large PDE residual gradients and heightened learning challenges, while smoother regions remain relatively simple. Hard-constrained methods [3-4] uniformly process all samples without dynamic prioritization adjustments, leading to suboptimal performance in challenging areas. By contrast, CoPINN quantifies sample difficulty using PDE residual gradients to identify stubborn regions. 
Through cognitive training scheduling, it first assigns higher weights to simpler samples to stabilize training, then gradually increases weights for harder samples to optimize performance in challenging zones. **Q2: How does the learning change?** The learning process of our CoPINN is as follows. During each epoch, CoPINN first dynamically evaluates each sample's difficulty based on gradients of PDE residuals. Then, we adaptively assign greater weights to easy samples in the early stage of training, and assign greater weights to difficult samples in the later stage of training (see Fig.3(c)). This learning scheme gradually shifts the neural network’s focus from simpler samples to more challenging ones. **Q3: Where error ends up concentrating?** To better clarify where the errors of CoPINN ultimately concentrate, you can refer to Figures 1, 4, 9, and 10 in the original paper. These figures demonstrate that, compared to baselines, CoPINN successfully reduces prediction errors in stubborn regions (areas with abrupt changes) and eliminates observable error concentration zones. Therefore, at the end of training, the errors of our CoPINN are not concentrated.
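The easy-to-hard weighting described in this reply can be illustrated with a small sketch. The normalization and the specific linear schedule below are plausible choices of ours for illustration, not the paper's exact Eq. (8)-(9); only the qualitative behavior (β < 0.5, easy samples up-weighted early, hard samples up-weighted late) comes from the discussion above:

```python
import numpy as np

def sample_weights(difficulty, epoch, total_epochs, beta=0.3):
    """Easy-to-hard sample weighting for a weighted PINN residual loss.
    difficulty: per-collocation-point scores, e.g., PDE-residual gradient
    magnitudes. Early epochs up-weight easy (low-difficulty) points; late
    epochs up-weight hard ones. The linear schedule and normalization are
    illustrative assumptions; beta < 0.5 keeps the emphasis shift gradual."""
    d = (difficulty - difficulty.min()) / (difficulty.max() - difficulty.min() + 1e-12)
    alpha = beta + (1.0 - 2.0 * beta) * epoch / max(total_epochs - 1, 1)  # beta -> 1-beta
    w = (1.0 - alpha) * (1.0 - d) + alpha * d
    return w / w.sum()  # normalized weights for the residual loss

difficulty = np.array([0.1, 0.5, 2.0, 0.05])  # gradient-magnitude proxy
early = sample_weights(difficulty, epoch=0, total_epochs=100)
late = sample_weights(difficulty, epoch=99, total_epochs=100)
# early training favors the easiest point, late training the hardest
```

Because the weights only rescale the loss contribution of fixed collocation points, no resampling is needed, which is the contrast with RAD/RAR-D drawn in the reply above.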
Geometry Informed Tokenization of Molecules for Language Model Generation
Accept (poster)
Summary: This paper proposes Geo2Seq, a 3D molecule tokenization method for 3D molecular generation. The authors convert molecules (in 3D space) to 1D sequences while preserving SE(3) invariance, and then train a molecule generative model based on a language model architecture. Geo2Seq equipped with various language models shows superior performance in molecule generation tasks. ## update after rebuttal I confirm that I have read the rebuttal and finalized my evaluation. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Proposed methods and/or evaluation criteria (e.g., benchmark datasets) make sense for the problem or application at hand. Theoretical Claims: The theoretical claims seem correct to me (at least at a high level). Experimental Designs Or Analyses: I checked the soundness/validity of the experimental designs or analyses. Supplementary Material: I checked the supplementary material (especially additional experiments). Relation To Broader Scientific Literature: The key contribution of this paper is to relate 3D molecule generation and language models. Previous 3D molecule generation relies on 3D graph generation techniques; however, such graph generation shows poor performance due to under-explored 3D graph architectures. This paper shows such drawbacks can be alleviated by tokenizing 3D molecules to 1D sequences, which can be directly incorporated with well-developed language models. Essential References Not Discussed: As far as I know, there exist some 3D molecule tokenization methods, e.g., 3D-MolT5: Towards Unified 3D Molecule-Text Modeling with 3D Molecular Tokenization [Pei et al., 2024] and Tokenizing 3D Molecule Structure with Quantized Spherical Coordinates [Gao et al., 2024], which can be discussed in this paper. Other Strengths And Weaknesses: Strengths 1. 
The problem of interest, 3D molecule tokenization, is important for molecular domain and poses a potential for real-world applications, e.g., drug discovery. 2. Proposed method seems reasonable; tokenizing 3D spherical coordinates in an SE(3)-invariant manner. 3. The improvements in controllable generation are impressive; Geo2Seq highly outperforms previous 3D graph-based molecule generation techniques. Weaknesses 1. Scalability of Geo2Seq. Compared to graph-based models (which accepts continuous values), discretization technique in Geo2Seq may limit the performance when the molecules become large. The results in Table 1 also show that Geo2Seq works well on smaller molecules (QM9) but not quite well on larger molecules (GEOM). Other Comments Or Suggestions: Comments 1. Representation can be further improved. Essential experimental ablations are deferred to the supplements, e.g., Table 4. I think the method section can be shortened, e.g., discussion about spherical coordinates. Typos L46, right column: "subsequent LMs used. and can seamlessly" -> "subsequent LMs used, and can seamlessly"\ L194, left column: "can be be proved" -> "can be proved" Questions For Authors: 1. Did the authors train GPT and Mamba from scratch or pre-trained checkpoints? 2. Can Geo2Seq be applied to text-to-molecule generation tasks, e.g., ChEBI-20 dataset? Such results will further highlight the effectiveness of using language models. Code Of Conduct: Affirmed. Overall Recommendation: 3
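The "invariant spherical representation" step this review highlights can be illustrated with a toy sketch: fix a local frame from the first three atoms (canonical atom ordering is assumed to have been applied already) and express every atom in spherical coordinates within that frame, so rigid rotations and translations leave the discretized values unchanged. This is our illustration of the general idea only, not Geo2Seq's exact algorithm:

```python
import numpy as np

def invariant_spherical(coords, decimals=2):
    """Toy SE(3)-invariant encoding: build an orthonormal frame from the
    first three atoms, express all atoms as (r, theta, phi) in that frame,
    and round for discretization. Illustrative only; not Geo2Seq itself."""
    origin = coords[0]
    x_axis = coords[1] - origin
    x_axis = x_axis / np.linalg.norm(x_axis)
    z_axis = np.cross(x_axis, coords[2] - origin)
    z_axis = z_axis / np.linalg.norm(z_axis)
    y_axis = np.cross(z_axis, x_axis)
    frame = np.stack([x_axis, y_axis, z_axis])   # rows: local x, y, z axes
    local = (coords - origin) @ frame.T          # coordinates in the local frame
    r = np.linalg.norm(local, axis=1)
    safe = r > 1e-9                              # guard the origin atom (r = 0)
    theta = np.where(safe, np.arccos(np.clip(local[:, 2] / np.where(safe, r, 1.0), -1, 1)), 0.0)
    phi = np.where(safe, np.arctan2(local[:, 1], local[:, 0]), 0.0)
    return [tuple(np.round(t, decimals)) for t in zip(r, theta, phi)]

pts = np.array([[0., 0., 0.], [1.5, 0., 0.], [0., 1.2, 0.], [0.7, 0.7, 0.7]])
c, s = np.cos(0.8), np.sin(0.8)
R = np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])  # a rigid rotation
moved = pts @ R.T + np.array([3., -2., 5.])            # rotate + translate
assert invariant_spherical(pts) == invariant_spherical(moved)  # encoding unchanged
```

The rounding step at the end stands in for discretization into a finite vocabulary, which is also where the scalability concern in the weakness above enters: coarser rounding shortens sequences but loses geometric resolution.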
Rebuttal 1: Rebuttal: Dear Reviewer CPUN, Thank you for your appreciation of our work and insightful comments! We have made efforts to thoroughly improve our work accordingly and provide responses for each concern here. Please also refer to our added experiments in [this Link](https://anonymous.4open.science/r/geo2seq-rebuttal/Geo2Seq_rebuttal.pdf) and our responses to other reviewers. > Some existing 3D molecule tokenization methods - Thank you for the advice. We have included the discussions in the paper revision and also briefly discuss them here. - `3D-MolT5: Towards Unified 3D Molecule-Text Modeling with 3D Molecular Tokenization`: 3D-MolT5 focuses on text-based molecule-related downstream tasks, including molecular property prediction, 3D molecule captioning, and text-based molecule generation. It does not consider 3D molecular generation tasks; thus, we do not adopt it as a baseline method. 3D-MolT5 is designed to handle 3D-dependent tasks with text instructions using the T5 model. Its tokenization method is based on the Extended 3D Fingerprint (E3FP) algorithm, where the embeddings of the same atom in both 1D and 3D tokens are summed to form the final joint representation. - `Tokenizing 3D Molecule Structure with Quantized Spherical Coordinates`: This is concurrent work with our submission, submitted to arXiv in Dec 2024, thus we do not adopt it as a baseline method. This work uses SMILES and coordinates to build sequences and a VQ-VAE model to discretize the continuous coordinates. Compared to our method, this work uses VQ-VAE to learn structure tokens, which lacks a guarantee of structural completeness. - In summary, our method differs by (1) extending canonical labeling to encode 3D structural isomorphism, (2) enabling reversibility between sequences and 3D isomorphic structures, and (3) establishing theoretical guarantees of structural completeness and geometric invariance. 
We believe our formulation complements theirs, and we thank the Reviewer for encouraging this discussion. > W1: Scalability of Geo2Seq - Thank you for the point. This is an interesting question we have been thinking about as well. Indeed, discretization could impose limits on very large molecules due to increased sequence length and reduced resolution. However, on the GEOM-DRUGS dataset, Geo2Seq still achieves competitive performances. Moreover, theoretically, if we use a larger vocabulary size, the discretization would not be a limitation for scalability. Larger molecules only bring longer context lengths of up to 750, which can be handled given the capability of LLMs. - To our understanding, the reason for the suboptimal performances on GEOM-DRUGS could be that the size of the GEOM-DRUGS dataset requires larger LMs than what we are using. The QM9 data size is ~100k and we are using ~90M-parameter LMs, while the GEOM data size is ~7M and we are using ~100M-parameter LMs. While this benefits efficiency, GEOM-DRUGS might perform optimally with larger model sizes. Due to the time limit of the rebuttal, we will explore further improving the performance on the GEOM-DRUGS dataset with larger LLMs in the future. > Paper Representation - Thank you for the suggestion! We have shortened the discussion in Sec.3 and moved essential experimental ablations including Table 4 to the main text. > Typos - Thank you for pointing these out! We have revised the paper and corrected the typos. > Q1: train from scratch or pre-trained? - We train from scratch. We propose a molecular tokenization method different from the NLP tokenization used by GPT and Mamba. Pre-trained checkpoints need to be used together with the pre-training tokenizer, thus not applicable in our setting. In addition, those checkpoints include little molecular 3D structural knowledge, thus not suitable for our molecular tasks. > Q2: text-to-molecule generation tasks? - Thank you for your insightful comments! 
We appreciate your suggestion to explore the interesting applicability of our Geo2Seq towards text-molecule tasks. Indeed, Geo2Seq can be applied to text-to-molecule generation tasks. The main difference would be extending the tokenization to cover text, which can be enabled with a BPE or SentencePiece tokenizer. On the other hand, text-to-molecule generation tasks pose a different setting with various baselines such as MolT5 and LDMol. We believe this opens a promising future direction and we will include the discussion in the paper revision. Within the time limit of the rebuttal, we focus on the field of 3D molecule generation in this work and leave this exploration for future work. We sincerely thank you for your time! Hope we have addressed your concerns through practical efforts and shown the contributions and significance of our work. We look forward to your reply and further discussions, thanks! Sincerely, Authors --- Rebuttal Comment 1.1: Comment: Thank you for the response. At this moment, I do not have further questions, and I lean towards acceptance of this paper. --- Reply to Comment 1.1.1: Comment: Dear Reviewer CPUN, Thank you for your acknowledgment and response. We are grateful for your appreciation of our work and glad to know that we have resolved all your questions. We sincerely thank you for your time and efforts! Sincerely, Authors
Summary: This paper explores the use of language models (LMs) for generating 3D molecules, a task that has previously been challenging due to the complex geometric structure of molecules. The paper proposes a novel tokenization method called Geo2Seq, which converts 3D molecular structures into SE(3)-invariant 1D discrete sequences that can be effectively processed by LMs. Results show that, compared with various state-of-the-art 3D molecule generation methods, including diffusion-based models like EDM and GEOLDM, Geo2Seq achieves comparable or better results in terms of atom stability, molecule stability, and valid percentage, especially in controlled generation tasks. Claims And Evidence: 1. Generalizability to Continuous Domains: The paper acknowledges the limitations of the discrete tokenization approach in terms of generalization to the continuous domain of real numbers. However, the analysis provided is limited, and more evidence on various datasets is needed to demonstrate the extent of this limitation and potential solutions. 2. Uniqueness of Generated Molecules: The paper reports a lower uniqueness percentage for Geo2Seq compared to diffusion-based methods on the QM9 dataset. This could be due to several factors, including the tokenization approach and the size of the dataset. Further investigation is needed to understand the underlying reasons and explore ways to improve uniqueness. 3. Error Case Analysis: The provided error case analysis is limited and focuses on specific examples. A more comprehensive analysis of different types of errors (e.g., syntax errors, repetition, hallucinations) and their causes is needed to better understand the robustness and limitations of the approach. Methods And Evaluation Criteria: 1. Geo2Seq Tokenization: The use of canonical labeling and invariant spherical representations is a reasonable approach for converting 3D molecular structures into a format suitable for LMs. 
However, the paper lacks a comprehensive comparison with alternative tokenization methods, such as graph-based representations directly used with graph neural networks. A more thorough comparison would provide a better understanding of the advantages and limitations of Geo2Seq. 2. Language Models: The choice of GPT and Mamba as LMs is reasonable, given their strong sequence modeling capabilities. However, the paper does not explore the potential of larger LMs, like llama and qwen. 3. Evaluation Metrics: The paper focuses on atom stability, molecule stability, and valid percentage as primary evaluation metrics. While these metrics are important, they do not fully capture the quality and diversity of the generated molecules. Consider incorporating additional metrics, such as novelty, structural diversity, and property prediction accuracy, to provide a more comprehensive evaluation. Theoretical Claims: 1. Validity of the 3D Graph Isomorphism Definition: The paper extends the concept of graph isomorphism to 3D graphs, which is not a standard definition. While the paper provides a definition, it is crucial to establish the validity and soundness of this definition in the context of 3D molecular structures. The proof should clearly justify the extension and demonstrate its consistency with existing graph theory concepts. 2. Completeness of the Proof: The proof in the Appendix seems to focus on demonstrating the sufficiency of the Geo2Seq mapping (i.e., if two molecules are 3D isomorphic, their sequences are identical). However, it does not explicitly address the necessity (i.e., if two molecules have the same sequence, they must be 3D isomorphic). This is a crucial aspect of a bijective mapping, and a more comprehensive proof is needed to establish both sufficiency and necessity. Experimental Designs Or Analyses: 1. Comparison with Alternative Tokenization Methods: The paper primarily focuses on comparing Geo2Seq with state-of-the-art 3D point cloud based methods. 
However, it lacks a comprehensive comparison with alternative tokenization methods, such as graph-based representations directly used with graph neural networks. This limits the understanding of the advantages and limitations of Geo2Seq compared to other approaches. 2. Additional Metrics: The evaluation focuses on atom stability, molecule stability, and valid percentage. While these metrics are important, they do not fully capture the quality and diversity of the generated molecules. Incorporating additional metrics, such as novelty, structural diversity, and property prediction accuracy, would provide a more comprehensive evaluation of the generated molecules. 3. Comparison with Pre-trained Models: The paper compares the performance of Geo2Seq with models that are not pre-trained on chemical datasets. Exploring the impact of pre-training on the performance of Geo2Seq would be valuable for understanding its effectiveness in leveraging knowledge from large chemical databases. Supplementary Material: Yes. I have checked the Appendix. Relation To Broader Scientific Literature: Yes. By applying the proposed methods to generate a more accurate 3D coordinates of molecules, researchers could better find appropriate drug candidates. Essential References Not Discussed: As far as I know, there is no essential reference not discussed. Other Strengths And Weaknesses: #### Strengths: 1. Originality: The paper presents a novel approach to 3D molecule generation using language models, which is a relatively unexplored area. The use of Geo2Seq for converting 3D molecular structures into 1D sequences is a creative combination of ideas from graph theory and language modeling. 2. Potential Impact: The approach has the potential to significantly impact the field of 3D molecule generation, particularly in drug discovery and materials science. 
The ability to efficiently generate valid and diverse 3D molecules with desired properties could revolutionize these fields and enable the discovery of new drugs and materials. 3. Clarity: The paper is generally well-written and provides a clear explanation of the approach and its benefits. The figures and tables are helpful in illustrating the key concepts and results. #### Weaknesses: 1. Limited Comparison with Alternative Methods: The paper primarily focuses on comparing Geo2Seq with state-of-the-art 3D point cloud based methods. A more comprehensive comparison with alternative tokenization methods and graph-based approaches would provide a better understanding of the advantages and limitations of Geo2Seq. 2. Limited Evaluation of Controlled Generation: The evaluation of controlled generation tasks is limited to specific quantum property values. Exploring the generalizability and robustness of controlled generation across different conditions would be valuable. 3. Limited Discussion of Limitations: The paper acknowledges the limitations of the discrete tokenization approach but does not provide a detailed analysis of these limitations and potential solutions. A more thorough discussion of the limitations and their impact on the performance would be beneficial. Other Comments Or Suggestions: Figure Captions: Some figure captions could be more informative and provide a clearer explanation of the content. For example, Figure 2 would benefit from a more detailed description of the equivariant frame and invariant spherical representations. Questions For Authors: Please see the above sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer aNgc, Thank you for your appreciation of our work and insightful comments! We address each point here, and our added experiments are in [this Link](https://anonymous.4open.science/r/geo2seq-rebuttal/Geo2Seq_rebuttal.pdf).

> Real number generalizability

We have conducted more studies on this point.
- LMs require tokenization, which limits number resolution. To mitigate this, we study flexible number-tokens and show that even with coarse discretization, our model maintains strong performance. Please see experiments in `[Link] Table 1`.
- We also analyze different discretization schemes. See experiments in `Appx C Table 5` and `[Link] Table 2`.
- We can extend to additional datasets in future work.

> Uniqueness of QM9

- For the uniqueness of QM9, we believe it is because the conversion from real numbers to discrete tokens limits the 3D search space, **especially on small datasets like QM9**. Evidence is that **we achieve 99.77% uniqueness on GEOM-DRUGS**. This reflects that a richer database or vocabulary **enlarges the search space of 3D structures and enhances uniqueness**.
- Moreover, following EDM, we emphasize validity/stability, thus setting temperature=0.7. This can be adjusted for a validity-diversity trade-off. We can easily improve uniqueness with a larger temperature (e.g., temp=1.0 enhances uniqueness from 81.7% to 86.5%).

> Error Case

- We now include error case studies. Please see `[Link] Table 7`.
- We also conduct quantitative studies. Among 100 error cases, 61% stem from incorrect distance-angle combinations, 25% from geometric inconsistencies, and 14% from subsequence repetition. Errors are rarer once the model converges well: when trained for 150 and 250 epochs, the model generates ~15% and <2% invalid samples, respectively.

> Comparison with graph-based methods

- Previously we focused on comparing with SoTA methods. Here we compare with more graph-based methods, as in `[Link] Table 3`.
- Meanwhile, we clarify that our baselines also directly use GNN representations, where GNNs are backbones while other techniques enable generation. E-NF is based on E(n)-GNN, G-SchNet uses the GNN SchNet, EDM and GEOLDM are parametrized by EGNN, and GDM by a non-equivariant MPNN.

> LMs like llama/qwen

- We provide experiments extending Geo2Seq with LLaMA. Please see `[Link] Table 4` and `our Response to Reviewer 39Ws`.

> More Metrics

Thanks for the advice. We already include these metrics in Appx. D.
- `Table 6, 7, 8, 9, 10, 11 in Appx. D` report evaluations **including novelty, structural diversity metrics, distribution distance metrics, etc**. Property prediction accuracy is in `Table 2 of the paper`. **Vast results** show we can outperform other methods across various metrics.

> Def 3.4 Validity

We clarify its soundness regarding theoretical alignment with graph theory and physical relevance in molecular modeling.
- Our definition builds upon colored graph isomorphism. Lemma 3.2 shows that attributed CL retains bijectivity, so differently attributed molecules cannot be conflated under this formulation. This is consistent with the graph matching literature. We extend to 3D graphs by requiring a valid SE(3) transformation mapping the coordinate matrices. This is formalized in Def 3.4 and supported by Lemma 3.3 & Thm 3.5. Our definition corresponds to label- and coordinate-preserving isomorphism.
- To further substantiate this, it aligns with physical indistinguishability in chemistry, where two conformers differing only by spatial orientation are considered identical. While our formulation is novel, it mirrors standard practice in geometric deep learning, which treats structures up to SE(3) equivalence.

> Proof Completeness

- We clarify that our proofs do include sufficiency & necessity explicitly. Thm 3.5: lines 883-899 prove sufficiency, and lines 900-928 necessity. Cor 3.6: lines 967-987 sufficiency and 988-1021 necessity.

> Pretraining

- We clarify that we do explore the impact of pretraining.
See `Appx D.5 and Table 13`. > Limited Controlled Generation - We focus on quantum properties because they are available conditions relating 3D information. Here we provide more experiments exploring generalizability across conditions. - The first experiment studies generalization across multiple conditions (see `[Link] Table 5`). The second experiment tests generalization to unseen conditions (see `[Link] Table 6`). While generalization brings challenges, we can capture certain multi-condition knowledge with robustness. > Limited Limitations - We now include a detailed limitations section. Key challenges include discretization loss, high-precision 3D geometry generalization, and solutions include tokenizer learning with continuous embeddings or vector quantized codes. > Figure Captions - We have included more informative captions. For Figure 2, see `[Link] Figure` for updated caption. Thank you for your time! Hope we have addressed your concerns with practical efforts and shown our work’s significance. We look forward to further discussions. Sincerely, Authors
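The rebuttal's appeal to physical indistinguishability up to SE(3) transformations can be checked numerically. The following minimal sketch (all names hypothetical; it uses pairwise distances as a simple stand-in for the paper's full invariant representation) verifies that a rigid motion, rotation plus translation, leaves the invariants of a toy geometry unchanged:

```python
import math

def rotate_z(p, angle):
    # Rotate a 3D point about the z-axis.
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def invariants(coords):
    # Pairwise distances: the simplest SE(3)-invariant descriptor.
    n = len(coords)
    return [dist(coords[i], coords[j]) for i in range(n) for j in range(i + 1, n)]

# A toy 3-atom geometry and the same geometry after a rigid motion.
mol = [(0.0, 0.0, 0.0), (1.09, 0.0, 0.0), (0.0, 1.09, 0.0)]
shift = (2.0, -1.0, 3.0)
moved = [tuple(c + t for c, t in zip(rotate_z(p, 0.7), shift)) for p in mol]

assert all(abs(a - b) < 1e-9 for a, b in zip(invariants(mol), invariants(moved)))
```

Any representation that is a function of such invariants (the paper's spherical coordinates in an equivariant frame are a richer example) assigns both conformers the same sequence, which is exactly the equivalence Def 3.4 formalizes.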
Summary: The paper proposes a method called Geo2Seq to generate 3D molecules using language models. The authors convert each molecule into an SE(3)-invariant discrete sequence of tokens—one token per atom, with tokens containing both atom type and spherical-coordinate information. Once converted to a sequence, any language model can be trained to produce new molecular sequences, which are then mapped back to 3D structures. Claims And Evidence: Much of the theoretical proof (canonical labeling correctness, SE(3)-invariant spherical coordinates) is already known from prior literature on graph isomorphism and spherical transformations. The paper does not fully demonstrate how they guarantee stronger empirical generation beyond giving a valid, lossless tokenization. The paper would benefit from more ablation: e.g., do we see the same advantage if we do a simpler coordinate encoding? Methods And Evaluation Criteria: The tokenization for unconditional and property-conditional generation does make sense: the method inserts property tokens into the same sequence, so an LM can learn to produce geometries with certain property values. Theoretical Claims: No obvious errors stand out. Experimental Designs Or Analyses: I believe it remains unclear whether the LM is capturing fundamental 3D chemistry knowledge or if it is mostly reproducing token patterns it has memorized. More analyses, like measuring internal geometry consistency or testing on molecules that deviate strongly from training data, would strengthen the claims. Supplementary Material: Yes, I have checked all additional experiments. Relation To Broader Scientific Literature: The paper was built based on prior works on 3D generation and proposed a geometry-aware tokenization to do generation tasks. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: 1. The proposed method is model-agnostic and requires no special modifications to model architectures. 2. 
The method preserves SE(3)-invariance. 3. The paper is well-written and easy to follow. 4. The paper conducts comprehensive experiments. Other Comments Or Suggestions: NA Questions For Authors: Please refer to other sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer hgAt, Thank you for your appreciation of our work and insightful comments! We have made efforts to thoroughly improve our work accordingly and provide responses to each concern here. Please also refer to our added experiments in [this Link](https://anonymous.4open.science/r/geo2seq-rebuttal/Geo2Seq_rebuttal.pdf) and our responses to other reviewers.

> Claims And Evidence

- Thanks for your valuable comments. Beyond standing as a valid lossless tokenization, Geo2Seq has SE(3)-invariance and adopts a spherical design, which significantly benefits empirical generation. We have conducted comprehensive ablation studies, explained in detail below.
- In `Ablation on 3D representation of Appx. C`, we explore using **simpler or different coordinate encodings** to represent 3D molecular structures. We compare the spherical coordinates in Geo2Seq with directly using the 3D Cartesian coordinates from xyz data files. We also study whether normalizing the xyz coordinates is effective, by subtracting the mass-center coordinates of each molecule from the xyz coordinates. Additionally, we compare with using SE(3)-invariant Cartesian coordinates that are projected onto the equivariant frame proposed in Section 3.2. We also explore managing distances in a more local scheme, which reduces their scale.
- Results in `Table 4 of Appx. C` demonstrate that LLMs achieve **the best performance on spherical coordinates**, showing the superiority of invariant spherical coordinates over invariant Cartesian coordinates. We believe this is because the numerical values of the distances and angles of spherical coordinates lie in a smaller region than Cartesian coordinates, which reduces outliers and makes it easier for LLMs to capture their correlation.
From these empirical results, we can see that the representation of azimuth and polar angles brings a clear advantage for LM learning over Cartesian coordinates; thus spherical representations with both distance schemes show promising performance.
- In Sec 3.2, we discuss the advantage of spherical coordinates. Compared to Cartesian coordinates, spherical coordinate values are bounded in a smaller region, namely a range of $[0,\pi]$/$[0,2\pi]$. Given the same decimal place constraints, spherical coordinates require a smaller vocabulary size, and given the same vocabulary size, spherical coordinates present less information loss. This makes spherical coordinates advantageous in discretized representations and thus easier for LMs to model.

> Experimental Designs Or Analyses

Thanks for your comment. This is an important point, and we have conducted various experiments and analyses, including measuring geometry consistency and novelty, to verify the capabilities of the method.
- `Table 6 and 7` report further random/controllable generation results including **novelty** metrics. Results show that our method achieves reasonably high **novelty** scores, which demonstrates that our method is **not simply memorizing or reproducing token patterns**.
- Also, experiments indicate that our generated molecules satisfy internal geometry consistency well. In addition to our main validity/stability results, we provide more evaluation results on various chemical metrics in `Appx. D, Table 6, 7, 8, 9, 10, and 11`, **including diversity metrics, distribution distance metrics, bond lengths and angles, reasonable internal energy, steric hindrance, etc**. **Vast results** show that we can outperform existing methods across metrics regarding various chemical constraints and geometry consistencies.
- In addition, in `Appendix F.2`, we provide UMAP visualizations of the learned (atom type, distance, and angle) token embeddings, which indicate that the model has successfully learned structural information in 3D space. Figure 8 shows that similar angle tokens (e.g., 1.41° and 1.42°) are placed next to each other, the overall structure of all angles forms a loop, and π-out-of-phase angles (3.14°, -3.14°, and 0°) are placed near each other. For atom type tokens, the model appears to capture the structure of the periodic table.
- We believe the current results are sufficient to prove the correctness and capabilities of our design: Geo2Seq with LMs can model the 3D molecular structure distribution and capture the underlying chemical rules.

We sincerely thank you for your time! We hope we have addressed your concerns through practical efforts and shown the contributions and significance of our work. We look forward to your reply and further discussions, thanks!

Sincerely, Authors
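The bounded-range argument in the rebuttal above can be made concrete. The sketch below (function names and token format are illustrative assumptions; the paper's actual pipeline also applies canonical labeling and an equivariant frame, omitted here) converts a Cartesian position to spherical coordinates and discretizes the values into fixed-precision tokens:

```python
import math

def cart_to_spherical(x, y, z):
    """Convert Cartesian coordinates to (r, theta, phi).

    theta (polar) lies in [0, pi] and phi (azimuth) in (-pi, pi],
    so both angles fall in a bounded range regardless of molecule size.
    """
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r > 0 else 0.0
    phi = math.atan2(y, x)
    return r, theta, phi

def atom_tokens(symbol, xyz, decimals=2):
    # Discretize each value to a fixed number of decimals; with bounded
    # angle ranges, this needs only a small vocabulary of angle tokens.
    r, theta, phi = cart_to_spherical(*xyz)
    return [symbol] + [f"{v:.{decimals}f}" for v in (r, theta, phi)]

print(atom_tokens("C", (1.0, 0.0, 0.0)))  # ['C', '1.00', '1.57', '0.00']
```

Because theta and phi are bounded, a two-decimal discretization needs only a few hundred distinct angle tokens, whereas unbounded Cartesian values would require a vocabulary that grows with molecule size, matching the vocabulary-size argument above.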
Summary: The paper proposes Geo2Seq, which transforms molecular geometries into SE(3)-invariant discrete sequences for molecule generation. Existing language model-based molecule generation works do not consider the 3D molecular geometries in the tokenization process. The paper addresses this limitation and shows that the proposed Geo2Seq improves performance in molecule generation. Claims And Evidence: The paper claims that tokenization that preserves 3D molecular graph information improves the quality of molecule generation. The claim is well supported by theoretical analysis and experimental results. Methods And Evaluation Criteria: The methods and evaluation criteria make sense for the problem from my perspective. Theoretical Claims: I've checked the correctness of the proofs in the paper and they seem to be correct from my side. Experimental Designs Or Analyses: The experimental designs and analyses are valid. Supplementary Material: I've checked the appendix. Relation To Broader Scientific Literature: The paper seems novel to me. Even though a recent paper proposes tokenization techniques considering 3D molecular geometric information, it limits its scope to text generation. Different from it, Geo2Seq is designed to generate molecules with theoretical analysis. So, I think it is novel. Essential References Not Discussed: I think that the essential references are discussed in this paper. Other Strengths And Weaknesses: Strengths - The paper is well written and easy to follow. - The paper clearly demonstrates the effectiveness of the proposed method Geo2Seq with theoretical analysis and experimental results. - The visualization map in Figure 6 is interesting to me. Weaknesses - I cannot find crucial weaknesses in this paper. Other Comments Or Suggestions: - I'm wondering about the performance of the proposed Geo2Seq with larger or more recent language models such as LLaMA. Questions For Authors: Please refer to the above sections. Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer 39Ws, Thank you for your appreciation of our work and insightful comments! We have made efforts to thoroughly improve our work accordingly and provide responses to each concern here. Please also refer to our added experiments in [this Link](https://anonymous.4open.science/r/geo2seq-rebuttal/Geo2Seq_rebuttal.pdf) and our responses to other reviewers.

> Geo2Seq with larger or recent language models such as LLaMA

- Thanks for your valuable comments. We would like to provide more experimental results extending Geo2Seq to LLaMA. Due to the limited time of the rebuttal, here we provide the results of Geo2Seq with LLaMA for the controllable generation task, which has a smaller training data size of 50K. We will update the results of Geo2Seq with larger LLaMA/Qwen models, as well as for other tasks and hyperparameter settings, in the paper revision later. To match our data size, we use the LLaMA implementation from HuggingFace with 768 hidden size, 8 hidden layers, and 8 attention heads. The setting is the same as for the GPT used in the paper. We train the model for 200 epochs.

| Property (Units) | α (Bohr³) | Δε (meV) | ε_HOMO (meV) | ε_LUMO (meV) | μ (D) | C_v (cal/mol·K) |
|--|-|-|--|--|--|--|
|Data|0.10| 64| 39 | 36| 0.043| 0.040|
|Random |9.01| 1470| 645| 1457|1.616| 6.857|
|GEOLDM|2.37| 587| 340| 522 | 1.108 | 1.025 |
|Geo2Seq with Mamba| **0.46**| **98**| 57 | 71| 0.164 | **0.275**|
|Geo2Seq with GPT| 0.53|102|**48**|**53**|**0.097**|0.325|
|Geo2Seq with LLaMA| 0.71|102|51|98|0.324|0.357|

- As shown above, Geo2Seq with LLaMA achieves significantly better results than the baselines and performs similarly to Geo2Seq with GPT for most properties, without careful tuning of hyperparameters. This is expected, given that the functional architectures of LLaMA and GPT are very similar. We have included the results and discussions in the paper revision. We sincerely thank you for your time!
We hope we have addressed your concerns through practical efforts and shown the contributions and significance of our work. We look forward to your reply and further discussions, thanks! Sincerely, Authors
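For reference, the configuration described in the rebuttal above (768 hidden size, 8 hidden layers, 8 attention heads, trained from scratch via HuggingFace) might be instantiated roughly as follows; `vocab_size` and `intermediate_size` are placeholders the rebuttal does not report:

```python
from transformers import LlamaConfig, LlamaForCausalLM

# Sizes from the rebuttal; vocab_size and intermediate_size are
# illustrative placeholders, not values reported by the authors.
config = LlamaConfig(
    hidden_size=768,
    num_hidden_layers=8,
    num_attention_heads=8,
    intermediate_size=3072,  # placeholder
    vocab_size=4096,         # placeholder: set by the Geo2Seq token vocabulary
)
model = LlamaForCausalLM(config)  # randomly initialized, ready for training
```

A matching `GPT2Config` with the same hidden size, depth, and head count would reproduce the GPT setting the authors compare against.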
RAGGED: Towards Informed Design of Scalable and Stable RAG Systems
Accept (poster)
Summary: This paper carries out empirical analysis to shed light on the impact of retrieval in a RAG system: 1/ when retrieval is needed, 2/ impact of retrieval depth, 3/ noisy retrieval, 4/ relation between retrieval improvements and final performance improvements. The paper proposes two metrics, RAG Stability Score and RAG Scalability Coefficient, to measure the robustness of a RAG system. Experiments are carried out on the NQ, HotpotQA, and BioASQ datasets. The main findings show that: 1/ retrieval may not help depending on whether the model is robust to noise, 2/ when increasing retrieval depth some models improve-then-plateau while others peak-then-decline, 3/ model noise robustness is more important than retrieval noise filtering, 4/ retriever improvements do not always lead to better response quality. Claims And Evidence: This is an empirical paper where the findings are model-dependent. The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: I have the following concerns over the proposed metrics: 1. The proposed RAG Stability Score (RSS) is defined as a symmetric measure, yet retrieval results are inherently asymmetric. Removing higher-ranked retrieval results has a greater negative impact than adding lower-ranked ones. Given this asymmetry, it is unclear why RSS can be considered a reliable and stable metric. 2. I find the RAG Scalability Coefficient (RSC) difficult to interpret. It is defined as the product of the last retrieval depth before performance plateaus and the cumulative improvements, but the rationale behind this multiplicative relationship is unclear. Additionally, this metric depends on a hyperparameter $\epsilon$, which may be challenging to tune in practice. Theoretical Claims: Not Applicable Experimental Designs Or Analyses: Dataset: Experiments are carried out on the NQ, HotpotQA, and BioASQ datasets.
However, I think NQ and HotpotQA are not diverse or representative enough for RAG evaluation because many facts are memorized by LLMs. Evaluation metric: unigram-based scores may not reliably measure LLM performance due to challenges in lexical matching. Supplementary Material: No. Relation To Broader Scientific Literature: This paper offers empirical findings and suggestions on developing RAG systems. The evaluation setups are similar to the literature, and the paper proposes two new metrics, RAG Stability Score and RAG Scalability Coefficient, to measure the robustness of a RAG system. Essential References Not Discussed: No Other Strengths And Weaknesses: I liked the empirical findings and insights from this paper. But I think the proposed metrics are not well-defined and the datasets may not cover representative RAG use cases. Other Comments Or Suggestions: Minor suggestion: it is recommended to produce figures in vector format, e.g., pdf. Questions For Authors: Your findings appear to be closely tied to the specific dataset or model used. How do you envision these insights applying to newer LLMs or to time-sensitive questions, and what factors might influence their generalizability? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and constructive feedback. We appreciate the recognition of our empirical contributions and insights into retrieval dynamics, particularly the introduction of RSS and RSC as tools for understanding reader robustness and retrieval scalability.

**1. Symmetry of RSS**

Thank you for the insightful observation. While it is true that adding lower-ranked documents (right side) and removing top-ranked ones (left side) may have different theoretical impacts, our empirical results show that this asymmetry is not consistent in practice. For example, LLAMA2-7B degrades more on the left (removing documents), whereas LLAMA3-70B degrades more on the right (adding documents). This variation supports our use of a symmetric window, which offers a model-agnostic and balanced way to measure stability. We will clarify this rationale in the revised manuscript.

**2. Clarifying Design of RSC**

RSC is defined as the product of two complementary components: 1) the last retrieval depth before plateau/decline, which captures how far the model scales before benefits taper off, and 2) the cumulative performance gain up to that point, which quantifies the total utility derived from increasing retrieval. By multiplying these factors, RSC distinguishes clearly between models that achieve quick but limited improvements (early plateau) and those that sustain meaningful improvements over deeper retrieval depths. The hyperparameter ε is intentionally user-defined, similar to early stopping in training. It allows users to specify what constitutes a meaningful gain, based on their deployment trade-offs. We chose ε = 0.5 for RSC based on empirical analysis: the standard deviation of F1 across all models and k values is consistently below 0.5 (mean ≈ 0.38, max = 0.46). This makes 0.5 a conservative but meaningful threshold that exceeds metric noise.
Moreover, increasing ε to 0.6 or 0.7 preserves model ranking, indicating robustness to the threshold choice.

**3. F1 vs. Semantic Metrics**

We recognize that unigram-based F1 may underrepresent semantic correctness. To address this, we validated our findings using an LLM-based semantic correctness metric on a subset of responses (Appendix H), and found strong alignment with the trends captured by F1. We will make this point clearer in the revised version.

**4. Memorization of Datasets by LLMs**

Thank you for the comment. We agree that newer datasets reduce the risk of memorization. Our original evaluation includes NQ (2019), HotpotQA (2018), and BioASQ (2023) to cover open-domain, multi-hop, and biomedical QA settings. To extend this, we incorporate a 2024 dataset, CRAG, as suggested by reviewer jqB4. We ran preliminary experiments on CRAG using two representative readers: FLAN-T5 (which shows improve-then-plateau behavior) and LLAMA-2 (which shows peak-then-decline behavior). These behavioral trends remain consistent on CRAG, suggesting that the retrieval-depth dynamics generalize well to newer datasets. We will include full CRAG results in the camera-ready version.

| Model | k=1 | k=5 | k=10 | k=20 | k=30 | k=50 |
|-----------|-------|-------|-------|-------|-------|-------|
| FLAN-T5 | 0.190 | 0.174 | 0.178 | 0.175 | 0.175 | 0.175 |
| LLAMA | 0.204 | 0.227 | 0.227 | 0.227 | 0.227 | 0.227 |

**5. Applicability to Newer LLMs and Time-Sensitive Queries**

We conducted additional experiments on dynamic questions in CRAG using both FLAN-T5 and LLAMA, and observed the following:
- FLAN-T5 maintains its improve-then-plateau retrieval-depth behavior on dynamic questions, consistent with its pattern on static ones. This suggests strong generalization and robustness to retrieval noise across domains.
- LLAMA, in contrast, shifts from a peak-then-decline trend on static questions to an improve-then-plateau trend on dynamic questions.
We attribute this to differences in internal knowledge: on static questions, LLAMA already knows the answer and retrieval introduces redundancy or contradictions, degrading performance at higher depths. On dynamic questions—where LLAMA lacks internal knowledge—it relies more heavily on retrieved context, and even noisy documents provide helpful signal, leading to continued improvement. This contrast reinforces the value of our reusable framework: even when retrieval dynamics vary across models or tasks, RSS and RSC offer a principled way to detect, quantify, and interpret these differences. Rather than assuming fixed behavior, our reusable framework and metrics help uncover how model robustness and scalability shift across domains. We expect newer LLMs—particularly those fine-tuned for retrieval-augmented tasks—to exhibit higher RSS and RSC due to improved handling of noisy or diverse context. Regardless of the model architecture, however, we believe our metrics remain essential tools for diagnosing retrieval sensitivity and informing robust, real-world RAG deployment.
While RSS currently uses a symmetric window to measure stability around the optimal retrieval depth, we agree that directional sensitivity can be important in practice. For example, some applications may face higher risk from over-retrieval (e.g., latency, cost, or noise from irrelevant documents), while others may be more vulnerable to under-retrieval due to conservative defaults. A directional variant of RSS (e.g., reporting LHS-RSS and RHS-RSS separately) could help identify whether a model is more fragile to adding versus removing context. This would allow practitioners to tune systems more cautiously based on which side is riskier, or to design guardrails accordingly. That said, our empirical findings show that this asymmetry is not consistent across models, which motivates the use of a symmetric metric as a general-purpose diagnostic. Still, we agree that directional extensions are a valuable refinement and will note this as future work in the revised paper. **2. Robustness to Evaluation Variance** To assess the sensitivity of our metrics, we computed the standard deviation of F1 across all models and k values (mean ≈ 0.38, max ≈ 0.46). We also verified that varying the key hyperparameters: ε for RSC (e.g., from 0.5 to 0.7) and δ for RSS (e.g., ±5 vs ±10) does not affect model rankings. This supports the robustness of our findings and shows that the metrics are not overly sensitive to reasonable shifts in the evaluation setup. We will make this variance analysis explicit in the final version. **3. Generalization to Future LLMs** We appreciate the reviewer’s concern about evolving model capabilities. Our primary contribution is a reusable, model-agnostic evaluation framework—not a fixed set of empirical conclusions. RAGGED is designed to support the ongoing evaluation of emerging models by quantifying stability, scalability, and absolute performance across retrieval depths, question types, and domains. 
As systems change, these new metrics remain valuable tools for uncovering new trends, brittleness, or failure modes. At the same time, RAGGED has already surfaced practical, generalizable insights. Across diverse datasets and model families, we consistently find that reader robustness, not retriever strength, is the dominant factor driving RAG trends in scalability and stability. This trend holds even on the recent CRAG (2024) benchmark. As RAG systems continue to evolve, we believe tools like RAGGED are essential for evaluating new models, identifying where robustness breaks down, and guiding improvements in system design.
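Since the thread above describes RSS and RSC only in words, a small sketch may help fix ideas. Everything here is an assumption-laden reading of those descriptions (the plateau rule, the symmetric window statistic, and the defaults eps=0.5 and delta=10 are illustrative, not the paper's exact formulas):

```python
def rsc(f1_by_depth, eps=0.5):
    """Scalability sketch: (last depth whose marginal F1 gain over the
    previous depth exceeds eps) times (cumulative gain up to that depth)."""
    depths = sorted(f1_by_depth)
    last_useful = depths[0]
    for prev, cur in zip(depths, depths[1:]):
        if f1_by_depth[cur] - f1_by_depth[prev] > eps:
            last_useful = cur
    return last_useful * (f1_by_depth[last_useful] - f1_by_depth[depths[0]])

def rss(f1_by_depth, delta=10):
    """Stability sketch: worst-to-best F1 ratio within a symmetric window
    of +/- delta retrieval depths around the best-performing depth."""
    k_star = max(f1_by_depth, key=f1_by_depth.get)
    window = [v for k, v in f1_by_depth.items() if abs(k - k_star) <= delta]
    return min(window) / max(window)

# F1 (in points) at each retrieval depth k for a hypothetical reader.
scores = {1: 40.0, 5: 44.0, 10: 45.0, 20: 45.2, 30: 45.1, 50: 45.0}
print(rsc(scores))  # 50.0: depth 10 times a 5.0-point cumulative gain
print(round(rss(scores), 3))
```

Under this reading, a model whose gains persist to deeper k earns a larger RSC, while a model whose F1 collapses just off its best depth earns a low RSS, matching the improve-then-plateau versus peak-then-decline distinction drawn in the reviews.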
Summary: The paper presents RAGGED—an evaluation harness for retrieval-augmented generation (RAG) systems. The authors identify conflicting narratives in the previously published literature, and aim to resolve them, especially around sensitivity to irrelevant documents. They examine how different retriever methods (e.g., BM25, ColBERT, Contriever) and reader models interact across various tasks and datasets (NQ, HotpotQA, and BioASQ). The framework introduces novel metrics—the RAG Stability Score (RSS) and the RAG Scalability Coefficient (RSC)—to quantify how stable and scalable a model's performance is as retrieval depth increases. Ultimately, the paper argues that the robustness of the reader to noisy retrieval (i.e., its ability to filter out irrelevant passages) is the key determinant of overall RAG effectiveness. Claims And Evidence: Reader Robustness Is Paramount: The paper claims that a reader's ability to handle noise is more critical than mere retriever quality for achieving stable and scalable RAG performance. Retrieval Depth Effects Are Model-Dependent: It argues that while some models benefit from more retrieved passages (improve-then-plateau behavior), others degrade when faced with increased noise (peak-then-decline behavior). Retrieval Improvements Yield Nonlinear Gains: Enhanced retriever performance (for instance, using dense retrievers) does not always translate into proportional improvements in reader performance, particularly for noise-sensitive readers. Domain-Specific Nuances Matter: The dynamics of retrieval and reading differ between open-domain tasks (like NQ and HotpotQA) and specialized domains (such as BioASQ), highlighting the need for tailored configurations. I find all the claims to be well supported with breadth but not with depth. For example, BM25 and Contriever are both not SOTA for embeddings. 1.
I would urge the authors to also evaluate their system on modern embeddings such as text-embedding-large (from OpenAI), among other closed-source options, and some of the SOTA generative or discriminative models from the Hugging Face MTEB leaderboard (for open source). 2. The datasets used by the authors are not the ideal use cases for evaluating LLMs' ability to retrieve. For example, HotpotQA documents are all within a few hundred to a thousand tokens, with a very clear "knowledge" component that most of the listed models have seen before (given the high overlap with RefinedWeb and other pre-training datasets); indeed, HotpotQA pass@k is today even used as a pre-training metric. Hence, I would encourage the authors to engage with newer RAG benchmarks such as CRAG. Methods And Evaluation Criteria: Addressed above extensively. Theoretical Claims: No Experimental Designs Or Analyses: I have already mentioned my concerns around the dataset used for eval, and the retrievers. I would also like to add that the LLMs used are quite dated. However, while I encourage the authors to consider updating this for camera-ready or a subsequent submission, I do NOT think this is a serious concern, since it should not be on the authors to keep up with the fast-moving model releases, and the trends hold independent of the models. Supplementary Material: No Relation To Broader Scientific Literature: In my view this paper answers some of the open questions from the "Lost in the Middle" paper by clearly defining the 2 new metrics: RSS and RSC. Essential References Not Discussed: Reference of CRAG for evals: https://arxiv.org/html/2406.04744v1 Other Strengths And Weaknesses: Addressed above Other Comments Or Suggestions: N/A Questions For Authors: In Figure 5, FLAN-T5 RSS is at 0.99. Do you suspect this is a function of the eval setting (context length cap), or do the authors happen to have any additional insights? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the constructive and thoughtful feedback. We appreciate that the reviewer recognizes that the paper resolves open questions in existing works and finds all the claims to be well supported. **1. Expanding to Newer Embedding Models** We appreciate the suggestion to evaluate newer embedding models. We are in the process of integrating models from the MTEB leaderboard, including leading open-source options like Linq-Embed-Mistral. However, due to the scale of the corpus (~111M passages) and the limited resources and time, we were unable to complete these runs within the rebuttal period. We are actively running the experiments and will include updated results in the revision. We agree that this extension will further strengthen our analysis. **2. Evaluating with CRAG** Thank you for highlighting the importance of more modern evaluation datasets. We agree that CRAG offers a more recent and challenging benchmark for RAG evaluation. To assess generalization, we ran preliminary experiments on CRAG during the rebuttal window using two representative readers: FLAN-T5 (showing improve-then-plateau behavior) and LLAMA-2 (showing peak-then-decline behavior). These behavioral trends remain consistent on CRAG, suggesting that the retrieval-depth dynamics generalize well to newer datasets. One difference is that the overall performance is worse (shifted down) compared to the performance we observed in our paper, which is as expected since this is a newer dataset. We will include full CRAG results in the camera-ready version.

| Model | k=1 | k=5 | k=10 | k=20 | k=30 | k=50 |
|-----------|-------|-------|-------|-------|-------|-------|
| FLAN-T5 | 0.190 | 0.174 | 0.178 | 0.175 | 0.175 | 0.175 |
| LLAMA | 0.204 | 0.227 | 0.227 | 0.227 | 0.227 | 0.227 |

**3. In Figure 5, FLAN-T5 RSS is at 0.99.
Do you suspect this is a function of the eval setting (context length cap), or do the authors happen to have any additional insights?** Thank you for this insightful question. We investigated whether FLAN-T5’s high RSS score might be influenced by context length limitations, particularly around the optimal retrieval depth of k = 25. To assess this, we compared token lengths after truncation between k = 20 and k = 25. We found that in at least 32% of examples, the input at k = 25 includes more tokens post-truncation than at k = 20, confirming that additional retrieved content is being processed. Despite this added context, the performance difference between k = 20 and k = 25 is < 0.5 points, suggesting the model remains robust even as more content is introduced. This supports our interpretation that FLAN-T5’s high RSS is not an artifact of truncation shielding the model from additional noise, but rather reflects genuine retrieval-depth stability. We will include this clarification and supporting analysis in the revised manuscript.
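The two retrieval-depth behaviors referenced in this rebuttal (improve-then-plateau and peak-then-decline) can be sketched as a simple heuristic over a performance-vs-k curve. This is a hypothetical illustration: the tolerance value and classification rule below are assumptions, not the paper's actual criterion, and the example curves are made-up numbers.

```python
# Hypothetical heuristic for labeling a retrieval-depth performance curve.
# The tolerance is an assumption for illustration, not taken from the paper.

def depth_behavior(perf_by_k, tol=0.5):
    """perf_by_k: dict mapping retrieval depth k -> score (e.g., F1 on a 0-100 scale)."""
    ks = sorted(perf_by_k)
    peak = max(perf_by_k.values())
    final = perf_by_k[ks[-1]]                 # score at the deepest k
    if peak - final > tol:                    # clearly drops off after its peak
        return "peak-then-decline"
    return "improve-then-plateau"

# Example curves (illustrative numbers, not from the paper):
plateau_curve = {1: 40.0, 5: 46.0, 10: 47.0, 20: 47.0, 50: 47.0}
decline_curve = {1: 42.0, 5: 48.0, 10: 45.0, 20: 41.0, 50: 38.0}
print(depth_behavior(plateau_curve))  # improve-then-plateau
print(depth_behavior(decline_curve))  # peak-then-decline
```

Under this heuristic, a reader whose score keeps climbing (or holds steady) as k grows is "improve-then-plateau", while one whose deepest-k score falls clearly below its peak is "peak-then-decline".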
Summary: The paper introduces RAGGED, a framework for evaluating Retrieval-Augmented Generation (RAG) systems. It emphasizes that RAG's performance depends not only on retrieval quality but also on the reader's robustness to noise. The study shows that reader robustness is the key factor for RAG stability and scalability, and that fine-tuning readers is more important than improving retrievers. RAGGED provides a structured way to optimize retrieval strategies and guide future research on developing robust, scalable RAG systems. Claims And Evidence: The RAG Stability Score (RSS) measures stability across retrieval depths, but stability is a complex metric. The paper assumes stability directly correlates with system performance, yet it doesn't address whether improved stability at a specific depth always leads to better task performance or just more predictable behavior. How does RSS capture true model performance beyond retrieval consistency? Can models with lower RSS but higher task-specific performance still be considered stable or robust? Methods And Evaluation Criteria: The RAG Stability Score (RSS) and RAG Scalability Coefficient (RSC) are introduced to measure stability and scalability across different retrieval depths. While these metrics provide some insight, they focus heavily on retrieval depth rather than directly measuring task-specific performance. Are these metrics comprehensive enough to evaluate the overall effectiveness of RAG systems in real-world tasks, especially when task-specific performance is more critical than retrieval stability? Theoretical Claims: The authors discuss the effects of retrieval depth on RAG performance, but without theoretical analysis, it remains unclear why these effects hold or how to generalize the findings beyond the empirical settings. The lack of theoretical grounding makes it harder to predict the behavior of RAG systems in novel conditions or tasks not covered in the paper. 
Experimental Designs Or Analyses: The experiments show variability in performance based on the reader-retriever combination (e.g., GPT-3.5 vs. CLAUDE vs. FLAN). While the inclusion of a range of models is a strength, it’s unclear how the variability between models is accounted for in the analysis. For example, a model like FLAN may benefit from retrieval more consistently than GPT-3.5, but are these differences purely due to the models’ architectures, or are there other factors like training data or task-specific tuning influencing the outcomes? Supplementary Material: The Appendix section provides necessary experimental details and evaluation metrics. Relation To Broader Scientific Literature: The key contributions of the paper extend the findings of prior work by highlighting the nuanced relationship between retrieval depth, model stability, and task-specific effectiveness. It provides a more refined view of RAG, emphasizing the importance of reader robustness and domain-aware retrieval strategies, while challenging the oversimplified notion that stronger retrieval always leads to better performance. These contributions push the research further in understanding and optimizing the interaction between retrievers and readers for RAG systems. Essential References Not Discussed: No Other Strengths And Weaknesses: This paper introduces the novel RAGGED framework for evaluating Retrieval-Augmented Generation (RAG) systems, focusing on reader robustness and retrieval depth. The authors provide a significant contribution by developing the RAG Stability Score (RSS) and RAG Scalability Coefficient (RSC), offering a structured way to assess model performance across different configurations. Their extensive empirical analysis across multiple datasets and models strengthens the paper’s validity and relevance to future RAG research. However, the paper lacks a theoretical foundation to explain its empirical findings, limiting the generalization of its results. 
Additionally, the discussion of key related works, such as REALM and FiD, is insufficient, and the evaluation metrics focus more on stability than end-task performance. Other Comments Or Suggestions: Figures 1 and 8 are not clear enough. Letters in Figure 1 are too small. The use of ** for bolding in the first paragraph of Section 2 of the article is not standardized. Questions For Authors: Please refer to the above comments. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and constructive feedback. We appreciate that the reviewer recognizes this paper provides a significant contribution by introducing structured metrics and is backed by extensive empirical analysis across multiple datasets and models. **1. RSS and its Relationship to Task-Specific Performance** Indeed, RSS is not intended to capture absolute task performance, and we were careful not to make such a claim. Instead, it measures the stability of performance near the optimal retrieval depth—capturing sensitivity to retrieval parameter changes, not performance magnitude. As the reviewer notes, these are distinct dimensions of system behavior. **2. Metric Comprehensiveness and Practical Relevance** While absolute performance is essential, stability and scalability are equally critical in real-world deployment: - Stability (RSS) reflects robustness to retrieval depth variation, which is particularly important in real-world scenarios where precise tuning is costly. - Scalability (RSC) reflects how well a model benefits from deeper retrieval, which is especially important when relevant information is sparse or buried. Thus, while task-specific performance is foundational, it does not provide a complete picture of real-world robustness or efficiency. We believe that stability, scalability, and absolute performance together form a more holistic and practical evaluation of RAG systems. We also discuss the importance of absolute task performance in Sections 4, 6, and 7, comparing with-context and no-context performance to see when RAG actually helps. **3. Can models with lower RSS but higher task-specific performance still be considered stable or robust?** Yes, a model can have high task-specific performance but low RSS. This would, however, mean it is brittle and harder to deploy reliably. This underscores the value of explicitly measuring stability and scalability alongside performance. **4.
Theoretical analysis for generalizing findings beyond the empirical settings.** We appreciate the reviewer’s interest in generalizing the findings. While our focus is empirical, we hypothesize that a key factor driving behavior is the internal-external knowledge tradeoff: models with strong internal priors but weak integration of external input may degrade with increased retrieval. Our empirical results support this. For example, LLAMA begins with high no-context performance, and, at low k, its with-context answers closely mirror its no-context outputs—indicating strong reliance on internal knowledge. However, as more documents are retrieved, LLAMA’s answers diverge from its initial predictions (as shown in the attached figure), yet performance worsens. This suggests that LLAMA is incorporating context—but in a way that disrupts rather than improves its predictions. In contrast, FLAN shows lower no-context overlap from the start, adapts more readily to retrieved information, and maintains more stable performance as k increases. We will incorporate this framing into the revised draft to better contextualize model behavior across tasks and domains. **5. A model like FLAN may benefit from retrieval more consistently than GPT-3.5, but are these differences purely due to the models’ architectures, or are there other factors like training data or task-specific tuning influencing the outcomes?** Regarding FLAN-T5 being more scalable than GPT-3.5, our observations suggest that differences could arise from both architecture and training objectives: - FLAN-T5 is explicitly trained with a denoising objective, learning to reconstruct original text from corrupted inputs, which potentially helps it handle noisy or partially irrelevant retrieval contexts better. GPT-3.5, in contrast, does not explicitly train with a denoising objective, possibly explaining its comparatively limited scalability.
- FLAN-T5 (encoder-decoder) explicitly encodes the retrieved context separately from decoding (generation), potentially enabling it to manage larger context sets more effectively. GPT-3.5, being decoder-only, processes context in a strictly autoregressive manner, which may cause diminishing returns as the amount of context increases due to difficulty in attending equally well to all retrieved information. **6. Related Work: REALM and FiD** Thank you for raising this. We agree that REALM [Guu et al., 2020] and FiD [Izacard & Grave, 2021] are foundational RAG systems: REALM introduced end-to-end retriever-reader pretraining and FiD showed the benefits of fusing multiple retrieved passages in the decoder of an encoder-decoder model. Our work is complementary: RAGGED offers tools to analyze reader-retriever behavior under varying retrieval depth and noise—dimensions not explicitly explored by these prior works. We will expand our Related Work section to reflect these connections. **7. Other Comments Or Suggestions** Thank you for the suggestions about the figures and formatting. We will fix them in the revision.
Summary: This paper introduces RAGGED, a systematic framework for evaluating Retrieval-Augmented Generation (RAG) systems, focusing on stability, scalability, and robustness to noise. The authors analyze how retrieval depth, retriever-reader interactions, and dataset characteristics influence RAG performance, challenging the assumption that stronger retrievers universally improve results. Claims And Evidence: The majority of claims are well-supported by systematic experiments and cross-model/dataset validation. Key limitations (threshold justification, closed-model opacity) do not invalidate the core findings but highlight areas for future work. The evidence convincingly demonstrates that reader robustness—not retriever quality—is the critical factor in RAG performance. Methods And Evaluation Criteria: The methods and evaluation criteria are well-suited for the paper’s goals. The framework’s design—diverse datasets, retrievers, readers, and noise conditions—provides actionable insights into RAG stability and scalability. While the novel metrics (RSS/RSC) and semantic validation could be refined, they represent a meaningful step toward standardized RAG evaluation. The experiments convincingly demonstrate that reader robustness, not retriever quality, is the critical factor, validating the utility of the framework. Theoretical Claims: No theoretical claims are proposed in this paper. Experimental Designs Or Analyses: The experimental designs are largely sound for the paper’s goals, with rigorous testing across datasets, retrievers, and readers. However, the validity of conclusions is partially limited by arbitrary metric thresholds, insufficient statistical testing, and synthetic noise assumptions. Addressing these issues would strengthen the framework’s generalizability and robustness claims. Supplementary Material: Yes; it provides detailed experimental results.
Relation To Broader Scientific Literature: The RAGGED framework advances the literature by formalizing robustness and scalability metrics for RAG systems, contextualizing instruction tuning’s role in noise tolerance, and validating semantic evaluation with LLM judges. Its innovations build directly on foundational RAG, robustness, and scaling literature while addressing underexplored challenges in real-world deployment. However, deeper engagement with fine-tuning-based RAG methods and realistic noise models would strengthen its positioning. Essential References Not Discussed: Some related work on adversarial training for RAG. Other Strengths And Weaknesses: Strengths: 1. The introduction of the RSS (RAG Stability Score) and RSC (RAG Scalability Coefficient) metrics formalizes robustness and scalability evaluation for RAG systems in a unified manner, addressing a gap in prior work that often treated these aspects separately. 2. Integrates concepts from adversarial robustness (e.g., noise injection) and computational efficiency (e.g., scaling with corpus size) into a cohesive evaluation paradigm, bridging domains like adversarial ML and distributed systems. 3. Provides actionable insights for practitioners to benchmark and optimize RAG systems, particularly in noisy or large-scale environments (e.g., enterprise search, customer support). Weaknesses: 1. Relies on random non-relevant passages for robustness testing, neglecting adversarial or temporally inconsistent noise (e.g., adversarial retrieval attacks, outdated facts), which limits practical relevance. 2. Experiments may not explore extreme-scale corpora (e.g., billions of documents), leaving scalability claims incomplete. 3. Absent theoretical guarantees (e.g., bounds on RSS/RSC under noise, convergence properties), leaving the framework purely empirical. Other Comments Or Suggestions: 1.
Include adversarial perturbations (e.g., query paraphrasing, passage rewriting via LLMs) and temporal noise (e.g., outdated documents) to enhance robustness evaluation. 2. Benchmark against retrieval attacks to stress-test RSS. 3. Add a subsection analyzing the relationship between noise magnitude (e.g., % of corrupted passages) and RSS, possibly deriving error bounds. Questions For Authors: 1. Your robustness experiments inject random non-relevant passages as noise. Could your conclusions about RSS hold under more realistic noise scenarios, such as adversarial perturbations (e.g., passages with semantically similar but incorrect answers) or outdated documents? If not, how might this limitation affect the practical applicability of RSS? 2. The scalability analysis assumes a fixed retrieval depth $k$. How would RSC metrics change if evaluated with adaptive retrieval strategies (e.g., FLARE’s iterative retrieval based on uncertainty)? Could such strategies invalidate the trade-offs observed in your experiments? 3. Can you provide formal analysis (e.g., bounds) linking noise magnitude (e.g., % of corrupted passages) to RSS scores? For example, is there a threshold beyond which performance degradation becomes inevitable, regardless of retriever architecture? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. We especially appreciate the recognition of how RAGGED addresses a gap in the literature by providing a unified framework for systematically evaluating scalability and stability and deriving actionable insights for real-world deployment. **1. Noise Assumptions in Robustness Evaluation** We appreciate the reviewer’s clarifying question about the robustness of our setup. To clarify: our “noise” is not artificially injected or random. Rather, it consists of non-gold passages retrieved by a real retriever. These are often topically relevant but incorrect, representing a realistic and common failure mode in deployed RAG systems. This setup reflects naturally occurring retrieval imperfections that practitioners routinely encounter. We agree that adversarial or temporally inconsistent noise (e.g., contradictory or outdated passages) is an important direction for future robustness research. However, modeling those scenarios requires assumptions (e.g., degree of contradiction, factuality, intent) that are beyond the scope of this work. That said, RSS could readily be extended to those settings, and we hope future work on adversarial or knowledge-conflict RAG systems will incorporate it. **2. Applicability to Extreme-Scale Corpora** We agree that testing scalability on corpora with billions of documents would be highly valuable. However, such datasets with annotated gold passages are currently limited. RAGGED is designed to be scalable by construction and can be readily applied to larger corpora as they become available. **3. Empirical vs. Theoretical Foundations** We appreciate the reviewer’s point. Our focus is empirical by design, aiming to provide actionable metrics for analyzing RAG system behavior in real-world settings. 
This mirrors the trajectory of many standard metrics (e.g., BLEU, ROUGE, F1), which were adopted for their practical value before receiving formal analysis. That said, we agree that theoretical grounding—such as bounding the relationship between noise and RSS—would strengthen the framework, and we will note this as an important direction for future work. **4. How would RSC metrics change if evaluated with adaptive retrieval strategies (e.g., FLARE’s iterative retrieval based on uncertainty)? Could such strategies invalidate the trade-offs observed in your experiments?** This is an excellent and insightful point. Our current analysis focuses on the standard retrieve-then-generate paradigm with a fixed top-k retrieval depth, which remains a common baseline in RAG systems. However, the core idea behind RSC—measuring how performance scales with increased retrieval—can be naturally extended to adaptive retrieval strategies like FLARE. In fixed-depth retrieval, RSC reflects the trade-off between performance gains and increasing the retrieval cutoff. In adaptive systems like FLARE, a similar trade-off exists between performance and the number or frequency of retrieval calls as determined by an uncertainty threshold. One could analogously define an RSC-style metric that varies the retrieval triggering threshold and measures how performance scales with the total retrieval budget. This would preserve the core RSC insight: quantifying trade-offs between retrieval cost and performance gain. We will mention this extension as a valuable future direction. **5. Justification of Metric Thresholds** We chose ε = 0.5 for RSC based on empirical analysis: the standard deviation of F1 across all models and k values is consistently below 0.5 (mean ≈ 0.38, max = 0.46). This makes 0.5 a conservative but meaningful threshold that exceeds metric noise. Moreover, increasing ε to 0.6 or 0.7 preserves model ranking, indicating robustness to the threshold choice. 
For RSS, we use a window of δ = ±5, which aligns with practical tuning ranges and matches how performance curves behave in most models (either plateauing or gently peaking). We also tested δ = 10 and found rankings unchanged. Smaller windows (e.g., δ = 1) lead to RSS variances across models of < 0.002, making the metric insensitive and less useful. **6. Statistical Robustness and Variability** To address concerns about statistical rigor, we analyzed standard deviations across k values for each model and confirmed that they remain consistently low (all < 0.5). Additionally, we verified that model rankings remain stable under variations in ε and δ. These findings support the reliability and robustness of our conclusions, and we will include supporting variance analyses in the revised draft.
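The window-based stability analysis and threshold-robustness check described in this rebuttal can be sketched in code. Since the exact RSS formula is not reproduced here, the score below (one minus the normalized performance spread within a ±δ window around the optimal retrieval depth) is an assumed stand-in for illustration, and the example curves are made-up numbers.

```python
# Hypothetical window-based stability score. The formula is an assumption for
# illustration; the rebuttal specifies only the +/- delta window around the
# optimal retrieval depth, not the exact RSS definition.

def stability_score(perf_by_k, delta=5):
    """perf_by_k: dict mapping retrieval depth k -> F1 (0-100 scale)."""
    k_star = max(perf_by_k, key=perf_by_k.get)            # optimal depth
    window = [v for k, v in perf_by_k.items()
              if abs(k - k_star) <= delta]                # scores near k_star
    spread = max(window) - min(window)                    # local performance variation
    return 1.0 - spread / 100.0

# Robustness check in the spirit of the rebuttal: model rankings should not
# flip when the window widens from delta = 5 to delta = 10.
stable_curve  = {1: 63.0, 5: 64.0, 10: 64.5, 20: 64.4, 30: 64.3, 50: 64.2}
brittle_curve = {1: 62.0, 5: 66.0, 10: 63.0, 20: 58.0, 30: 55.0, 50: 52.0}
for d in (5, 10):
    assert stability_score(stable_curve, d) > stability_score(brittle_curve, d)
```

A flat curve near its peak yields a score close to 1, while a sharply peaked curve is penalized, mirroring the intuition that stability reflects insensitivity to the retrieval-depth setting rather than performance magnitude.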
A Checks-and-Balances Framework for Context-Aware Ethical AI Alignment
Accept (poster)
Summary: This work introduces a three-branch checks-and-balances framework for ethical alignment in LLMs. The framework incorporates emotional modeling to distinguish linguistic behaviors in documents. It includes the DIKE module, serving as the "legislative branch" for establishing ethical standards, and the ERIS module, functioning as the "judicial branch" for contextual interpretation. Additionally, the framework introduces a debate mechanism between DIKE and ERIS to address ethical scenarios while maintaining cultural sensitivity and contextual awareness. Claims And Evidence: This is interesting work. The effectiveness of key modules, such as BEAM, is demonstrated through experiments. However, these experiments are preliminary, relying on a single dataset and specific LLM model settings. The paper claims to address complex ethical challenges across different cultural contexts, which is indeed an important and unsolved problem. However, its effectiveness has not been validated on morality-related datasets, e.g., the MoralChoice dataset [1], limiting the strength of its claim. Additionally, the work claims to avoid the "Whack-A-Mole" and "reward hacking" problems in RLHF, but further detailed evidence is needed to substantiate this claim. A more rigorous evaluation with diverse datasets and broader experimental settings would strengthen the validity of the proposed approach. [1] Nino Scherrer, Claudia Shi, Amir Feder, and David Blei. Evaluating the moral beliefs encoded in llms. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2023. Methods And Evaluation Criteria: The method has not been comprehensively evaluated, as its assessment is limited to a small dataset and specific LLMs. Certain components, such as the debate mechanism, remain unevaluated, and the full framework workflow has not been systematically tested.
As the authors acknowledge, “this article focuses on addressing three critical questions rather than providing a comprehensive evaluation of our proposed modules” Theoretical Claims: no proof Experimental Designs Or Analyses: The current experimental results demonstrate preliminary effectiveness in Emotion Layer Evaluation, Behavior Classification, and Behavior Correction. Introducing an additional LLM-augmented dataset to expand the coverage of emotions included in Figure 1 would enhance the soundness of the experiments. Supplementary Material: I have reviewed the whole supplementary material Relation To Broader Scientific Literature: The moral dilemma [1], particularly in the context of diverse cultural backgrounds, presents a challenge in designing human-AI interaction. This paper may offer a potential approach to addressing this challenge, though it lacks thorough effectiveness validation. Another valuable contribution, in my view, is its interdisciplinary perspective. By leveraging an emotion model to identify linguistic behavior in documents, it provides a novel and fine-grained approach to behavior recognition, which can guide the modification of LLM-generated content. [1] Zhou, Ziyi, et al. "Unveiling the Bias Impact on Symmetric Moral Consistency of Large Language Models." Advances in Neural Information Processing Systems 37 (2024): 41303-41326. Essential References Not Discussed: n/a Other Strengths And Weaknesses: Please see above Other Comments Or Suggestions: Please see above Questions For Authors: 1. How is the behavior spectrum in DIKE constructed? 2. What is the theoretical explainability for relying on GPT-generated emotion vectors instead of directly classifying behavior? 3. Given that the accuracy of GPT’s emotion recognition and frequency vector construction is crucial, how does LLM emotion recognition error impact the overall framework’s effectiveness? 4. 
Are the ethical guardrails set by system administrators enforced as hard constraints, or can they be bypassed in certain cases? If ethical guardrails are rigidly enforced, should system administrators' decisions themselves be subject to scrutiny? Would granting administrators full control over ethical constraints be an ethically optimal approach? 5. If the debate process in Table 1 fails to reach a consensus, is the final decision always made by human experts? Does this imply that human intervention is the only viable solution to complex ethical dilemmas? The description in Section 4.1 is unclear, particularly regarding the second and third columns. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful comments. We agree that validating on morality datasets and expanding emotion-labeled texts are important directions. We mitigate LLM-generated emotion vector noise through aggregation (two LLMs plus human annotators), diverse sources, and ERIS adversarial review. Regarding our use of love letters: We attempted to use hate speech datasets but faced challenges: (1) available datasets weren't comprehensive, and (2) LLMs refused to process them, citing policy violations. Our experience with a political debate paper triggered emotional responses from colleagues, forcing us to reconsider our approach. Below, we address each of your specific questions: Q1: How is the behavior spectrum in DIKE constructed? A1: The behavior spectrum in DIKE is constructed along a continuous scale from -1 to +1. Taking "love" as an example: Figure 2 shows this spectrum ranging from +1 (strong affection) to -1 (antipathy). Figure 2a displays the emotions that GPT-4 associates with each intensity level when directly prompted. To validate this approach, we had GPT-4 rewrite 54 love letters across seven targeted behavior intensities: -1, -0.7, -0.4, 0, 0.4, 0.7, and 1 (shown in rows in Figure 2). We then counted the emotion terms expressed in these rewrites, as shown in Figure 2b. Comparing Figures 2a and 2b reveals a key insight: while GPT-4's initial theoretical mappings appear neat and categorical, the rewritten letters exhibit complex emotional blends. Letters expressing the highest intensity of love frequently contain not only joy and longing but also elements of sadness, anxiety, or despair (as evidenced in Keats' letters). This demonstrates that real linguistic behavior involves contextually layered emotions and non-linear mapping. We've successfully applied similar behavior-to-emotion mapping in our previous work on debate contentiousness, which also produced interpretable and insightful emotion profiles.
Q2: What is the theoretical justification for using GPT-generated emotion vectors instead of directly classifying behavior? A2: As demonstrated in our answer to the first question, real emotional expression in language is contextually complex and non-linear. The comparison between Figures 2a and 2b reveals that even when GPT-4 attempts to generate content at a specific behavior intensity (such as +1 for love), the resulting text contains a blend of emotions. This empirical finding supports our theoretical approach of using emotion vectors rather than direct behavior classification. Emotion vectors offer a continuous and interpretable representation of affective tone, which is more granular than categorical behavior labels. By mapping text to an emotion distribution first (as shown in Figure 2b), we capture the rich emotional blends that exist in authentic expression. For instance, the love letters rewritten at intensity +1 still contained elements of sadness and anxiety alongside joy and longing. Direct behavior classification would miss these subtleties. Q3: Given that the accuracy of GPT’s emotion recognition and frequency vector construction is crucial, how does LLM emotion recognition error impact the overall framework’s effectiveness? A3: We mitigate LLM-induced noise through: 1. Multiple rewrites per document to stabilize emotion distributions 2. Aggregation across different models and human raters 3. ERIS acting as an adversarial reviewer, flagging questionable outputs. These augmentations aren't perfect; more validation is needed. Nevertheless, our unsupervised learning findings are both surprising and useful. Q4: Are ethical guardrails hard constraints or bypassable? Should administrator decisions be scrutinized? A4: Guardrails are configurable soft constraints, defining target bounds on emotional-behavioral spectra enforced by DIKE, but ERIS may challenge them based on cultural/contextual grounds. If consensus isn't reached, escalation to human moderators occurs.
This ensures ethical enforcement isn't absolute and norms can be scrutinized. This raises important questions about ethical authority—a governance transparency challenge, not a settled feature. Q5: If the debate process fails to reach consensus, is the final decision always made by human experts? A5: Yes. The goal isn't determining winners but unearthing all perspectives, then presenting these to humans for decisions. Table 1 outputs Θ+ and Θ-, arguments and counterarguments with justifications—more transparent than current RLHF. Q6: The description in Section 4.1 is unclear, particularly regarding the second and third columns. A6: Do you refer to Table 1 in Section 4.1? It is generated by querying GPT-4 for the description (column 2) and the dominating emotions (column 3) of each behavior. Figure 2a presents that mapping. With Figure 2a, Table 1 looks redundant, and we can remove it. We thank the reviewer for these valuable questions that will strengthen our work.
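As a small addendum to A3, the aggregation step (averaging emotion vectors from two LLMs plus human annotators to damp any single rater's noise) can be sketched as below; the rater values here are hypothetical, not the paper's data.

```python
def aggregate(vectors):
    """Average emotion vectors from multiple raters; assumes all raters
    score the same set of emotions."""
    keys = vectors[0].keys()
    return {k: sum(v[k] for v in vectors) / len(vectors) for k in keys}

# Hypothetical per-rater vectors (e.g., GPT-4, Gemini, one human annotator):
ratings = [
    {"joy": 0.8, "sadness": 0.2},
    {"joy": 0.6, "sadness": 0.4},
    {"joy": 0.7, "sadness": 0.3},
]
consensus = aggregate(ratings)  # roughly {"joy": 0.7, "sadness": 0.3}
```

Averaging is only the simplest choice; a weighted or median-based aggregate would serve the same noise-damping purpose.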
Summary: This paper proposes a three-branch checks-and-balances framework for the ethical alignment of LLMs, inspired by governmental separation of powers. The framework consists of three independent but interacting components: LLMs (Executive branch), DIKE (Legislative branch), and ERIS (Judicial branch). Unlike RLHF which struggles with social bias, reward hacking, and catastrophic forgetting, this framework provides interpretable, adaptable, and culturally aware ethical reasoning. The paper also proposes adversarial testing via ERIS: DIKE’s ethical rules are stress-tested by ERIS, which challenges decisions using diverse cultural perspectives, ensuring balanced, context-aware moderation. Claims And Evidence: All claims are well-supported. Methods And Evaluation Criteria: The study validates the framework through three experiments: emotion layer evaluation, behavior classification, and adversarial review, demonstrating this new AI ethical alignment mechanism. It avoids the limitations of RLHF, ensuring stable ethical standards while adapting to different cultural contexts. Theoretical Claims: This work is based on solid psychological experiments and emotion research, ensuring theoretical correctness. Experimental Designs Or Analyses: The study validates the proposed framework through three key experiments: emotion layer evaluation, behavior classification, and adversarial ethical review. In the emotion layer evaluation, self-supervised learning is used to construct an emotion-behavior mapping, improving classification accuracy by 11.3% over GPT-4’s zero-shot inference. In the behavior classification task, DIKE demonstrates greater robustness than GPT-4’s zero-shot classification, effectively capturing subtler emotional variations. The adversarial ethical review experiment employs the DIKE-ERIS adversarial mechanism to ensure balanced ethical oversight. 
By challenging DIKE’s ethical decisions with diverse perspectives, ERIS prevents over-censorship while maintaining ethical consistency. Overall, the experimental design is well-structured and justified, demonstrating the effectiveness of this novel ethical alignment framework. Supplementary Material: No supplementary materials are provided, but the Appendix is well-structured. Relation To Broader Scientific Literature: This paper is closely related to emotion and behavior modeling across various domains, with a particular focus on their applications in AI ethics. Additionally, it is tightly connected to techniques used in AI Alignment Post-training, such as RLHF and RLAIF. Essential References Not Discussed: The related works currently discussed are well-structured and sufficient for this paper. Other Strengths And Weaknesses: This framework has several advantages. First, it separates behavior modeling from knowledge modeling, preventing catastrophic forgetting of knowledge. Second, it emphasizes AI ethics at the behavioral level, enhancing interpretability and helping regulators refine behavioral guardrails. Third, it models behaviors through emotions, allowing for a more nuanced understanding of ethical alignment. Finally, it ensures adaptability and fairness by incorporating an adversarial module, ERIS, which challenges ethical boundaries through diverse perspectives, fostering nuanced and balanced decision-making. Other Comments Or Suggestions: The paper is well-structured and clear. Questions For Authors: All the claims except knowledge modeling are well supported. One question is: In the DIKE-ERIS adversarial mechanism, ERIS challenges DIKE’s ethical rules by simulating extreme cases to enhance the system’s ability to handle complex ethical scenarios. Additionally, when problematic content is detected, DIKE makes targeted adjustments rather than simply deleting or blocking it, preserving the original emotional expression. 
For example, offensive language may be transformed into a more neutral yet semantically equivalent statement. How does this process, as the authors claim, prevent catastrophic forgetting of knowledge? I understand that it does not directly affect the LLM itself, but during content replacement, it could potentially introduce new hallucinations. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their kind support and for raising this important point. To answer your question specifically, the key to preventing catastrophic forgetting lies in our framework’s architectural separation of knowledge modeling (LLMs) from behavior regulation (DIKE) and ethical judgment (ERIS). As outlined in Sections 3.1 and 3.2, DIKE does not fine-tune or overwrite the LLM’s internal weights; instead, it operates as an external behavioral filter that evaluates and, if necessary, rewrites output at the surface linguistic level. While your concern about potential hallucinations during content rewriting is valid, our framework is designed to minimize semantic drift in two key ways:
1. Emotion-Constrained Rewriting: Rewriting is guided not by free generation but by quantifiable emotional spectra (BEAM), ensuring emotional fidelity and preserving communicative intent.
2. Adversarial Validation with ERIS: ERIS acts as a dialectical reviewer, challenging DIKE’s revisions to ensure both contextual appropriateness and semantic alignment.

As shown in Section 3.4, this adversarial review helps flag hallucinations or misinterpretations before final output. Since the base LLM remains untouched, any hallucinations introduced during DIKE’s rewriting are non-permanent and observable, allowing for iterative refinement without compromising the model’s original capabilities. This contrasts with fine-tuning approaches, where hallucinations can become embedded in the model weights and are more difficult to reverse. We agree that hallucination detection and control is a crucial area for future development. As a next step, we plan to integrate fact-checking or retrieval-augmented modules into ERIS to verify semantic fidelity. Furthermore, we envision a semi-automated human-in-the-loop pipeline, where raters review and annotate ERIS’s edits to ensure factual correctness and maintain output integrity and quality. Again, thank you so much for your thorough review.
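The external-filter arrangement described in this rebuttal (evaluate output against guardrail bounds, rewrite if needed, never touch model weights) can be sketched in a few lines. This is a rough sketch under assumptions: `classify`, `rewrite`, and the bounds format are hypothetical stand-ins, not the actual DIKE/ERIS implementation.

```python
def within_guardrails(emotion_vec, bounds):
    """Check each emotion intensity against configurable [lo, hi] bounds."""
    return all(bounds[e][0] <= v <= bounds[e][1] for e, v in emotion_vec.items())

def moderate(text, classify, rewrite, bounds, max_rounds=3):
    """External-filter loop: the base LLM's output is evaluated and, if
    necessary, rewritten at the surface level; model weights never change."""
    for _ in range(max_rounds):
        vec = classify(text)          # emotion vector of the current text
        if within_guardrails(vec, bounds):
            return text
        text = rewrite(text, bounds)  # emotion-constrained rewrite
    return text  # in the full framework, unresolved cases escalate to humans
```

Because the loop only transforms text, any hallucination a rewrite introduces stays outside the model and can be caught on a later pass, which is the non-permanence argument the rebuttal makes.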
Summary: The authors propose a framework that initially classifies a given document on a spectrum of emotions. If the document falls outside of certain constraints on that spectrum of emotions, the framework uses an adversarial process between the classification module, called Diagnostics, Interpretation, Knowledge-independent learning, and Ethical guardrails (DIKE), and another module, which is called ERIS. The recommendation reached through that process is then used to regenerate the document, so that it falls within the emotional constraints. ## update after rebuttal I am still maintaining my original score, because: * the evaluation was not expanded and still has a limited scope (only 54 data points) * in my opinion, the writing is not very clear * the description is lacking technical details, which in my opinion would not allow a reader to reproduce their system/experiments Claims And Evidence: The article makes a lot of claims and is not very precise in its terms, which makes it hard to actually understand what is going on; for example, the authors speak about adhering to ethical principles, but actually do some sort of emotional alignment of the generated text. They also claim that their framework can incorporate different cultural backgrounds, but the paper itself never explains how this is done on a technical level. The paper is also very high-level in general, without much technical detail. Initially the authors present their approach as an alternative to RLHF, but they never present any tangible advantages during their evaluation. Methods And Evaluation Criteria: Only parts of the framework are analyzed with a small dataset. I do not feel like they actually show that their approach has any advantages over existing systems. Theoretical Claims: The article does not use much theory, which therefore cannot be checked. 
Experimental Designs Or Analyses: The authors provide no end-to-end results and only evaluate parts of their framework on a very small dataset with 66 samples. Since no end-to-end results are presented, there is also no ablation study showing which parts of the framework contribute to which aspects of the results. Costs of the approach are not discussed at all. There is no ground truth or any baseline established for the analysis in Figure 2. Inconsistent spikes between DIKE and humans for example for 'hopeful' are not discussed. Supplementary Material: I only read the appendix. Relation To Broader Scientific Literature: I am not sure whether this framework actually represents anything novel. It applies some insights from psychology to the alignment of AI generation, but the used terms are not very precise, which makes it harder to understand the paper and the framework. Applications of the framework are only discussed in passing or not at all. Essential References Not Discussed: None to my knowledge. Other Strengths And Weaknesses: * introduction: * "Users have reported that optimizing one task on ChatGPT degrades performance in others (Kirkpatrick et al., 2017; Rusu et al., 2015)." - ChatGPT was released in 2022, so how can a paper from 2017 be relevant? Even GPT-1 was just released in the same year as the Kirkpatrick article, which contains no mentions of GPT. * very inconsistent: sometimes the LLM is used for knowledge generation, later for processing * 2.4 just reiterates content from the introduction with two additional references * 3.2: I found the description confusing as the BEAM spectra and the letter spectrum partially contain the same terms such as joy. 
* 3.3, revised statement: does not actually contain "newcomer" as the latter alleges * paragraph at line 360-370 seems like it could breach anonymity * no prompts * no impact statement -> should be there, especially for such a topic Other Comments Or Suggestions: * line 50: "to make bed" - grammar: "to make their bed" * line 270, second column: "newcomers," - comma should be removed * line 306: "DIKE”s" - two times apostrophe (same in line 307) * Table 1: #3 missing closing bracket for while loop * line 370: probably wrong cite command, should be \citep in my opinion * line 352, second column: "(James, 1890). and" - dot should be removed * line 371, second column: "than 0.3 or a scale" - probably "on" instead of "or" * line 655: "Section 2a" - document reference points to Figure 2a instead * line 622 to 626: there is content missing here - sentences are not complete * line 764: "(1971 - 1855)" - birth year after year of death; according to Wikipedia, it is actually 1770–1850 * references: * inconsistent use of title case for the titles * "A general theoretical paradigm to understand learning from human preferences" - missing place of publication * "Constitutional ai: Harmlessness from ai feedback" - missing place of publication * "EVINCE: Optimizing Adversarial LLM Dialogues via Conditional Statistics and Information Theory" - cited differently than the other arXiv papers * "Deep reinforcement learning from human preferences" - missing place of publication; cited twice * "GPT-4 Technical Report" - cited differently than the other arXiv papers * "Training language models to follow instructions with human feedback" - missing place of publication * "Defining and characterizing reward hacking" - missing place of publication * "Learning to summarize from human feedback" - cited differently than the other arXiv papers Questions For Authors: * introduction: The motivational examples are not clear: Why are they relevant? 
* 3.4: How does table 1 represent ERIS, if it shows the algorithm how the interaction between DIKE and ERIS is implemented? * How ERIS works is not shown at all? * How can ERIS be customized? * evaluation: If there are 54+12=66 data samples and 24 are reserved for testing, how can Figure 2b illustrate the results for 54 letters? * line 350-353, second column: "This concept is supported by Deisseroth’s optogenetic studies (Deisseroth, 2015), discussed in William James’ “The Principles of Psychology” (James, 1890). and corroborated in Minsky’s “Society of Mind” (Minsky, 1988)." - How can an article from 2015 be discussed in 1890 and 1988 respectively? But maybe I am misreading the sentence. * 4.2: How can a higher prediction entropy be an indicator that DIKE performs better than Zero-Shot GPT4, but an even higher entropy for humans is a cause of concern? * 4.3, second paragraph: I thought that the guardrails, i.e. the constraints, are used to identify documents outside the targeted norm. DIKE is only involved in that process to a certain degree. * What are the intended applications? Just aligning the LLM generation? Content moderation? * How can DIKE be used as a classification system and also as an agent in the adversarial process? * How are ethical principles encoded into a spectrum of emotions? * To what does E.1 relate? Maybe Figure 1, however there are seven rows and only six rows are discussed here. Ethical Review Flag: Flag this paper for an ethics review. Ethics Expertise Needed: ['Discrimination / Bias / Fairness Concerns'] Ethical Review Concerns: The application of the framework is not clearly laid out, but its primary focus seems to be the realignment of existing documents (which might be generated by LLM) within certain emotional constraints, which are then rewritten based on feedback provided by the framework. 
But the classification part could also be used for moderation of human written content, the article mentions specifically the escalation to a human moderator, if the adversarial process does not produce a conclusive result. This could even advance towards censorship or the forced reformulation of messages before publication. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer, We respectfully request that you reconsider some of your comments, as they appear to misinterpret key aspects of our paper. For a constructive dialogue to proceed, we would like to address nine samples of your misunderstandings (there are more, but space is limited): 1. Relevance of a 2017 paper to ChatGPT Your comment: "ChatGPT was released in 2022, so how can a paper from 2017 be relevant?" This comment misunderstands how scientific citations function. The 2017 paper by Kirkpatrick et al. is cited for its work on catastrophic forgetting—a general neural network problem—not specifically in relation to ChatGPT. This represents a foundational concept in continual learning and is entirely valid as background theory. 2. Chronological misinterpretation Your comment: "How can an article from 2015 be discussed in 1890 and 1988 respectively?" This appears to be a misreading of a clear sentence. Our text states: "This concept is supported by Deisseroth (2015), discussed in James (1890), and corroborated by Minsky (1988)"—indicating that the idea appears in earlier and later works. We are not suggesting that James discussed Deisseroth's work. This misinterpretation raises concerns about the thoroughness of the review. 3. Context-dependent interpretation of entropy Your comment: "How can higher entropy be good for DIKE but bad for humans?" The paper explicitly explains this distinction. Higher entropy for DIKE demonstrates that it does not collapse into a single interpretation (unlike GPT-4's zero-shot approach), enabling better expressiveness. For human annotations, excessive variance reflects subjectivity and inconsistency. The interpretation of entropy differs depending on context and purpose. 4. ERIS functionality Your comment: "How does Table 1 represent ERIS, if it shows the algorithm how the interaction between DIKE and ERIS is implemented? How ERIS works is not shown at all?" 
Table 1 specifically describes the adversarial process between ERIS and DIKE in detail—including subtopic breakdown, iterative rebuttal, and modulation of contentiousness. Section 3.4 further elaborates on ERIS's cultural role. 5. Impact statement requirement Your comment: "No impact statement — should be there" Though we did not mark our impact statement at the end of Section 1, IT IS THERE, clearly, beneath our contribution summary. 6. Ethical concerns Your comment: "This could even advance towards censorship or forced reformulation of messages before publication." This represents a speculative ethical concern without grounding in the paper's actual scope. Our system is designed for LLM alignment—not mass surveillance or state censorship. The extrapolation into dystopian scenarios based on theoretical misuse extends beyond the scope of a scientific review. Current RLHF approaches could be similarly scrutinized according to your sentiment. You raised ethical concerns on this work, and if accurate, they should have applied equally to RLHF performed by all LLMs and hundreds of prior papers working on ethical alignment. 7. Ground truth and baselines Your comment: "No ground truth or baseline for Figure 2" Ground truth was clearly defined in Section 4.2: five human annotators plus GPT-4 and Gemini were used to produce averaged reference judgments. This methodology was explicitly described in the paper. 8. Applications Your comment: "What are the applications?" Applications are discussed explicitly throughout the paper: content moderation and ethical alignment. Sections 1, 3.2, and 5 specifically mention these applications, including escalation procedures to human reviewers. Ethical alignment represents a critical research area in contemporary AI development. 9. Map behaviors to emotions Your comment: "How are ethical principles encoded into a spectrum of emotions?" Well, Section 3 provides the method thoroughly. 
Please refer to the first reviewer's comments, which note that this mapping process is solidly grounded in psychology, cognitive science, and self-supervised learning: By reviewer LkWg: "The paper establishes theoretical validity through psychological foundations (Plutchik/Scherer emotion models [1,2]) and cognitive-linguistic theories (James-Lange [3], Schachter-Singer [4])... Key theoretical claims about emotion-behavior mapping (BEAM's linear emotion spectra) and architectural separation benefits (preventing catastrophic forgetting through independent components) derive logical support from cited neuroscience (Deisseroth's optogenetics [5]) and machine learning principles (Kirkpatrick's catastrophic forgetting studies [6])." 10. Editorial comments Thank you for your editorial comments. We will fix relevant ones promptly. We hope this clarification addresses your concerns and enables a more productive dialogue regarding the substantive aspects of our research.
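As a concrete footnote to point 3 above (context-dependent interpretation of entropy): the quantity involved is ordinary Shannon entropy over a prediction distribution. The distributions below are purely illustrative, not the paper's measurements.

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a prediction distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Illustrative distributions over three behavior labels:
zero_shot = [1.0, 0.0, 0.0]  # collapses to one interpretation -> 0 bits
dike      = [0.7, 0.2, 0.1]  # spreads mass over plausible labels -> ~1.16 bits
humans    = [0.4, 0.3, 0.3]  # near-uniform; high subjectivity -> ~1.57 bits
```

The same number is read differently by purpose: moderate entropy in a model signals expressiveness (it did not collapse to one label), while very high entropy across human annotators signals inconsistency.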
Summary: This work introduces a novel three-branch framework (LLM/DIKE/ERIS) for ethical alignment in LLMs, inspired by governmental checks-and-balances. It decouples knowledge generation from ethical oversight and integrates emotion-driven behavioral modeling (via BEAM) with adversarial cultural adaptation. The framework aims to address RLHF’s limitations (e.g., catastrophic forgetting, cultural rigidity) by enabling interpretable, context-sensitive ethical reasoning. Pilot experiments on love letters demonstrate feasibility, with DIKE improving behavior classification accuracy by 11.3% over GPT-4 zero-shot. Claims And Evidence: 1. The notion that separating knowledge generation (LLM) from ethical oversight (DIKE) prevents catastrophic forgetting finds support in references to [1] and a notable 11.3% improvement in behavior classification accuracy over GPT-4 through pilot tests on love letters. However, it faces criticism due to the absence of a direct RLHF comparison using QA benchmarks, experiments focused on sentiment analysis that avoid knowledge-intensive tasks, and a notable absence of an ablation study to isolate DIKE's impact on catastrophic forgetting. 2. The claim that self-supervised learning reduces bias compared to RLHF is weakly supported by DIKE's design, lacking in-depth bias analysis and fairness metrics. Additionally, there is no examination of GPT-4's inherent biases, such as gender stereotypes in rewritten texts. 3. The paper claims that BEAM enables precise emotion-behavior mapping for ethical adjustments, with evaluation results on love letter datasets providing support. However, the data distribution may be too simplistic to reflect real-world scenarios. The authors should consider conducting studies on more complex ethical scenarios, such as hate speech detection, to strengthen their claims. [1] Kirkpatrick, J. et al. Overcoming catastrophic forgetting in neural networks. 
Proceedings of the National Academy of Sciences, 114(13):3521–3526, 2017. Methods And Evaluation Criteria: The proposed three-branch framework effectively addresses the limitations of RLHF by introducing a structured separation of responsibilities: ethical oversight (DIKE), adversarial cultural adaptation (ERIS), and knowledge generation (LLMs). This separation is complemented by BEAM's emotion-behavior mapping, which quantifies linguistic patterns through self-supervised learning. The evaluation criteria are well-suited to the problem, combining quantitative metrics—such as an 11.3% improvement in classification accuracy over GPT-4 and higher prediction entropy, indicating a more nuanced understanding of behaviors—with qualitative validation. However, the study needs broader cultural validation rather than focusing solely on the easier love-letter scenarios. Theoretical Claims: The paper establishes theoretical validity through psychological foundations (Plutchik/Scherer emotion models [1,2]) and cognitive-linguistic theories (James-Lange [3], Schachter-Singer [4]), rather than formal mathematical proofs. Key theoretical claims about emotion-behavior mapping (BEAM's linear emotion spectra) and architectural separation benefits (preventing catastrophic forgetting through independent components) derive logical support from cited neuroscience (Deisseroth's optogenetics [5]) and machine learning principles (Kirkpatrick's catastrophic forgetting studies [6]). While the adversarial DIKE-ERIS interaction draws conceptual strength from governmental checks-and-balances theory, the paper doesn't formally prove the framework's convergence properties or stability guarantees. The core theoretical contribution lies in the psychologically grounded architectural design rather than mathematical formalism. [1] Plutchik, R. A psychoevolutionary theory of emotions. Social Science Information, 21(4-5):529–553, 1982. [2] Scherer, K. R. What are emotions? and how can they be measured? 
Social Science Information, 44:693–727, 2005. doi: 10.1177/0539018405058216. [3] James, W. What is an emotion? Mind, 9(34):188–205, 1884. URL http://www.jstor.org.proxy.lib.sfu.ca/stable/2246769. [4] Lange, C. G. The emotions: A psychophysiological study. William & Wilkins, 1885. [5] Deisseroth, K. Optogenetics: 10 years of microbial opsins in neuroscience. Nature Neuroscience, 18(9):1213–1225, 2015 [6] Kirkpatrick, J. et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526, 2017. Experimental Designs Or Analyses: The experimental design demonstrates methodological coherence through three key validation strategies: 1. Emotion-behavior correlation analysis using GPT-4-generated rewrites to establish BEAM's mapping validity. 2. Comparative classification accuracy tests (DIKE vs GPT-4 vs humans) with inter-annotator agreement analysis to quantify behavioral understanding improvements. 3. Entropy measurements revealing DIKE's superior nuance in emotion spectrum modeling. While constrained by LLM content policies necessitating love letter substitutions for hate speech evaluation, the design compensates through psychological plausibility checks analyzing historical correspondence to verify multi-emotion coexistence patterns. However, the evaluation shows four limitations: 1. Cultural adaptation claims rest on architectural potential rather than cross-cultural dataset validation. 2. Small test set size (24 letters) reduces statistical power for behavioral classification metrics. 3. Absence of direct RLHF performance comparisons leaves framework advantages partially theoretical. 4. The evaluation does not cover more complex or challenging scenarios. Supplementary Material: No supplementary material provided. 
Relation To Broader Scientific Literature: The paper effectively underscores the limitations of RLHF, which, while enabling models to align with human preferences, often results in the models losing sight of their originally optimized responses. This persistent challenge is addressed in the paper through a novel approach that entirely decouples behavior modeling from knowledge representation, marking a significant departure from previous methods. Essential References Not Discussed: The paper covers most relevant literature. Other Strengths And Weaknesses: No Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful feedback. Below, we address the main concerns raised. A. Complex Emotions and Emotion Modeling We acknowledge the critique regarding our treatment of complex emotions (e.g., pride, guilt, forgiveness). As discussed in Appendix D, we recognize the challenges in modeling these emotions, particularly their cultural variability and layered interpretations. Our approach deliberately begins with basic emotions to establish a clear and reliable foundation before extending to more complex emotional constructs. We distinguish that while basic emotions are largely unconscious and reflexive, complex emotions such as regret involve conscious deliberation. As noted in the paper, we propose modeling complex emotions as conscious behaviors. Given space constraints, we presented our preliminary work while acknowledging limitations and opportunities for future research. B. Data Scope and Self-Supervision Validity Regarding concerns about using GPT-4 to generate behavioral training data, we implemented several measures to mitigate potential feedback loop risks: 1. We incorporated rich stylistic diversity in our dataset (spanning over 50 authors across 200 years) 2. We conducted multi-agent validation using GPT-4, Gemini, and five human annotators (Section 4.2) We initially attempted to use hate speech datasets for our research, but encountered significant challenges: (1) none of the available datasets proved comprehensive, and (2) LLMs refused to process this data, citing policy violations. In today's increasingly polarized environment, using cultural or religious subjects for experiments risks inciting backlash. Our own experience attempting to publish a debate paper on a political issue resulted in negative emotions from university colleagues. We made earnest attempts to explore these avenues but ultimately had to reconsider our approach due to these practical constraints. 
We view self-supervised learning as advantageous for training data scalability. As demonstrated in Section 4.2, our approach can integrate human feedback, particularly within the ERIS module—a novel contribution of this work that provides context-based interpretation of ethical constitutions. C. Practicality and User Adaptability Our three-branch framework offers modularity and configurability:
- ERIS operates on-demand rather than continuously, reducing computational burden
- Guardrails are adjustable for domain or regional policies (Section 3.2)
- ERIS incorporates cultural variation, and DIKE's parameters can be customized by users or administrators

While ethical oversight introduces some complexity, we believe the benefits of transparency, cultural flexibility, and safety justify this cost for sensitive applications. Regarding computational efficiency, our approach is comparable to current practices. LLMs already perform constitution checking (DIKE's function); we primarily add the ERIS judicial module for context-dependent interpretation, increasing token usage by at most 2x. Recent LLM developments (e.g., DeepSeek, GPT-4o, Claude 3.7) employ similar iterative validate-rethink loops, analogous to our DIKE-ERIS architecture. As NVIDIA CEO Jensen Huang noted at the March 18th GTC keynote, contemporary LLM architectures now routinely consume up to 20x more tokens and 150x more computation to improve reasoning quality and reliability. This suggests that enhancing AI quality necessarily incurs computational costs, a pattern observed since AlexNet. D. Theoretical Foundations We appreciate the recognition that our work establishes theoretical validity through psychological foundations (Plutchik/Scherer emotion models [1,2]) and cognitive-linguistic theories (James-Lange [3], Schachter-Singer [4]), rather than formal mathematical proofs. This interdisciplinary approach is indeed intentional. 
Many successful computational models, including CNNs, originated from neuroscience before their mathematical frameworks were fully developed. Before backpropagation and AlexNet demonstrated the validity of data-centric approaches, CNNs faced criticism for being computationally intensive and mathematically trivial. Our paper argues that while RLHF has strong mathematical foundations, it tends to address bias mitigation in a "whack-a-mole" fashion rather than drawing from behavioral science. Our theoretical foundation in psychology and cognitive science serves as inspiration for developing quantitative models. The complementary nature of interdisciplinary research strengthens our approach. Our quantitative model and unsupervised learning pipeline address training-data scalability challenges. We acknowledge there is much work to be done to validate and improve these methods, and we appreciate the opportunity to share these ideas with the research community. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. Most of my concerns have been addressed. However, since the method proposed in the paper has only been tested on a small scale and lacks large-scale validation, I can only slightly increase my score.
Premise-Augmented Reasoning Chains Improve Error Identification in Math reasoning with LLMs
Accept (poster)
Summary: This paper looks into the problem of reference-free verification of LLM reasoning chains in the context of mathematical reasoning. Authors hypothesize that a step in a reasoning chain should be verified only under its premises, and propose constructing Premise-Augmented Reasoning Chains (PARC) to improve the traceability of a reasoning chain. Authors create a corresponding dataset, called PERL (Premises and ERrors identification in Language models) to test LLMs’ capability in identifying premises as well as the effectiveness of premises. Claims And Evidence: Through extensive experiments on Math datasets, authors show that verifying each step under its corresponding premises increases the accuracy of identifying errors and their types. In their error design, authors consider 3 error types specific to mathematical reasoning: mathematical errors, logical inconsistencies, and accumulation errors. Authors highlight the importance of newly introduced accumulation errors, but do not provide supporting evidence in their study (e.g., through an ablation study). Methods And Evaluation Criteria: yes Theoretical Claims: N/A Experimental Designs Or Analyses: The study is limited to four popular mathematical datasets and three commonly used LLMs. Results are consistent across experiments. Supplementary Material: Yes, prompts in Appendices. Relation To Broader Scientific Literature: Authors elaborate on existing research, generalizing a previously developed error taxonomy (i.e. Roscoe), and extensive work on using Verifiers to improve a model's reasoning abilities. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Overall, the paper is clear and well written. As LLMs are often used as Judges and Verifiers, it is important to know how good they actually are at this task, how we can meta-evaluate their evaluation skills, and how to improve on them. The proposed approach can be used to enhance evaluations of mathematical reasoning. 
In my opinion, this paper could benefit from an additional ablation study. Other Comments Or Suggestions: N/A Questions For Authors: 1. You compare with a baseline that classifies each reasoning step according to the predefined error taxonomy. In reality, there can be other error types outside those you pre-defined. I have noticed you did have an "Other" error type in the baseline prompt (Table 12, p18). Do you have any statistics about the final error distribution within each dataset in the baseline setup? How large is the "other" group, and what's inside? 2. The paper could benefit from an additional ablation study that would highlight the importance of the proposed taxonomy. In particular, it would be interesting to see how each aspect - mathematical errors, logical inconsistencies, and accumulation errors - affects final error detection. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their kind remarks on the writing and experimental details of the paper. Here we address their concerns.

**concern 1** - Study is limited to the four popular mathematical datasets, and three commonly used LLM

**response** - Please note that we used three popular model families, and for each family we included at least 2 models of different scales, making the number of models six, which establishes the robustness of our method across models. Further, we attach some additional experiments on the recently released ProcessBench dataset for the error identification task, where across model classes we see significant improvements in error identification, further showcasing our method’s reliability.

GSM8k

| Model | Method | Correct acc | Wrong acc | F1 | Δ (Delta) |
|------------|----------|-------------|-----------|------|-----------|
| Qwen 7B | Baseline | 66.3 | 36.7 | 47.2 | – |
| | Ours | 60.1 | 38.6 | 47.0 | -0.2 |
| Qwen 72B | Baseline | 98.4 | 61.4 | 75.6 | – |
| | Ours | 97.8 | 59.7 | 74.1 | -1.5 |
| Qwen 32B | Baseline | 97.9 | 43.0 | 59.8 | – |
| | Ours | 95.9 | 55.1 | 70.0 | 10.2 |
| Llama 8B | Baseline | 17.1 | 36.7 | 23.3 | – |
| | Ours | 33.7 | 37.8 | 35.6 | 12.3 |
| Llama 70B | Baseline | 77.7 | 57.5 | 66.1 | – |
| | Ours | 89.6 | 70.0 | 78.6 | 12.5 |

MATH

| Model | Method | Correct acc | Wrong acc | F1 | Δ (Delta) |
|------------|----------|-------------|-----------|------|-----------|
| Qwen 7B | Baseline | 46.0 | 25.4 | 32.7 | – |
| | Ours | 45.6 | 41.2 | 43.3 | 10.6 |
| Qwen 72B | Baseline | 88.5 | 33.7 | 48.8 | – |
| | Ours | 86.7 | 53.9 | 66.5 | 17.7 |
| Qwen 32B | Baseline | 90.0 | 22.4 | 35.9 | – |
| | Ours | 86.9 | 53.9 | 66.5 | 30.7 |
| Llama 8B | Baseline | 5.6 | 19.1 | 8.7 | – |
| | Ours | 11.0 | 27.5 | 15.7 | 7.0 |
| Llama 70B | Baseline | 32.4 | 32.8 | 32.6 | – |
| | Ours | 61.6 | 55.4 | 58.3 | 25.7 |

**concern 2** - how large is the "other" group, and what's inside

**response** - We observed that the frequency of the "Other" error type in our ground truth dataset was quite low: only 10 steps were marked as Other. Manual inspection revealed that the "Other" group manifests in forms such as “The solution becomes stuck in a repetitive loop” and “Step is incomplete”.

**concern 3** - In particular, it would be interesting to see how each aspect - mathematical error, logical inconsistencies, and accumulation errors - affects final error detection

**response** - Since we are restricted to 5000 characters, we request the reviewer to kindly see the response to the third concern by reviewer xai3, where we shared the detailed numbers.

**concern 4** - Authors highlight the importance of newly introduced accumulation errors, but do not provide evidence in their study

**response** - A holistic evaluation of LLM reasoning should consider the entire reasoning chain rather than relying solely on a binary correct/incorrect outcome of the final answer. Reasoning chains may contain subtle intermediate errors despite following a globally correct plan, ultimately rendering the solution incorrect, a nuance overlooked by final-answer correctness metrics. Prior works like PRM800K and ProcessBench have typically annotated reasoning chains only up to the first erroneous step, discarding subsequent steps due to ambiguity. To our knowledge, we are the first to formally introduce the concept of accumulation errors, enabling a more comprehensive evaluation of reasoning chains. Similar to how teachers award partial credit for nearly correct answers, evaluation frameworks should recognize when the overall reasoning plan is sound despite minor mistakes, assigning partial credit accordingly. Accumulation errors, where a step is locally correct but built on flawed premises, explicitly capture this subtlety.
Identifying accumulation errors highlights how earlier mistakes compromise the reliability of the reasoning chain, making it essential to incorporate these errors into holistic scoring methods.
Summary: The authors explore how to improve error identification in reasoning chains, which consist of multiple individual reasoning steps. They start by converting the reasoning chain into a directed acyclic graph, called a Premise-Augmented Reasoning Chain (PARC), where the nodes are reasoning steps and the edges indicate the dependency on premises from previous reasoning steps. They present two LLM-based approaches: providing the full reasoning chain up to the current step and asking the LLM to identify the premises of the current step, and alternatively building pairs of the current step with each of its predecessors and asking the LLM whether a given step is a premise of the current step. In addition to mathematical errors within a reasoning step and logical inconsistencies, where a reasoning step is not consistent with its premises, the authors also propose a new error type: the accumulation error, where the reasoning itself is correct but depends on incorrect premises. The authors also derive a dataset, called Premises and ERrors identification in Language models (PERL), which they intend to publish, based on 607 positive, negative, and synthetic negative samples, which they subsequently use to evaluate seven different models. They find that LLMs can identify the premises reasonably well and that their approach improves error identification. ## update after rebuttal While the rebuttals answered some of my questions, the addition of another dataset has not convinced me enough to raise my score. Claims And Evidence: The claims are reasonably supported by their evaluation. Methods And Evaluation Criteria: I think the methodology is sound in general and the authors evaluate seven models, showing that their approach at least generalizes in this dimension. The chosen metrics seem appropriate. * The benchmark seems a bit small: only 607 datapoints. * No external dataset used, so it is unclear how much their approach would generalize.
Theoretical Claims: Yes, to a certain degree; I have checked their formulas, which are reasonable most of the time, but contain some errors as well as some notation mistakes. * line 143, second column: I believe the capital R should be a lowercase r to be consistent. * line 202: wrong symbol, if my understanding is correct - should be I instead of F * algorithm 1: input and output of the algorithm should be a small r? Experimental Designs Or Analyses: The experiments are reasonably well designed, albeit closely following their approach. The authors provide some end-to-end results, but only for their own dataset. The authors supply some form of ablation study, where they test different approaches. * If my understanding is correct, the baseline performance of these models is missing, making it harder to judge by how much the whole approach improves upon the base models. * No discussion of the additional compute resources, time and costs. Supplementary Material: No, I only read the appendix. Relation To Broader Scientific Literature: The ideas are for the most part novel. The use of the DAG to model the dependencies between thoughts/reasoning steps is somewhat akin to Graph of Thoughts, but is here applied after the generation is finished instead of during the generation, and is then used to identify errors. Essential References Not Discussed: None to my knowledge. Other Strengths And Weaknesses: The writing is clear and the paper is easy to follow. While the evaluation is detailed to a certain degree, for example the number of evaluated models, other aspects such as external datasets or the performance of baseline models are missing. Other Comments Or Suggestions: * I suggest to also capitalize "Reasoning" in the title. * line 16, second column: "(CoT; Wei et al. (2023))" - wrong cite command * line 111: "section 5" - "Section" should be capitalized to be consistent * line 143, second column: I believe the capital R should be a lowercase r to be consistent.
* 3.1/3.2/3.3: steps s are sometimes bold and sometimes not * line 250/251: "Algorithm~1" should be on a single line * line 254: should be "Tables" instead of "Appendix" (or the correct Appendix A.5 should be referred to) * 5.1: recall is sometimes used capitalized and sometimes not * table 3, caption: "Error identification" - should not be capitalized; "Premises" -> "Model Premises" to be consistent * line 410: "Identification :" - extra white space before the colon * table 4, caption: "Error identification" - should not be capitalized * table 5: unclear from the caption what data(set) is shown * line 406/407, second column: "Table~3" - should be on a single line * line 414/415, second column: "Table~4" - should be on a single line * references: * "Alexa arena: A user-centric interactive platform for embodied ai" - at least abbreviations should be capitalized; URL cuts into the margin * "OpenAI o1 System Card" - cited twice * I suggest citing the conference versions, for example CoT (NeurIPS'22) or ToT (NeurIPS'23) * "Metamath: Bootstrap your own mathematical questions for large language models" - cited differently than the other arXiv papers * line 853: should be "Tables" instead of "Appendix" * line 1013: missing brackets around "step" Questions For Authors: * Why do longer reasoning chains make it harder to verify individual steps? * If the nodes are the reasoning steps, how can the edges link to premises? * Why are there more PARCs than chains in the dataset? (A2) Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their appreciation of the writing and experiments. Here we address their concerns.

**Typos** - Thanks for bringing those to our notice, we will definitely fix those in the final version.

**concern 1** - Dataset is a bit small, no external datasets used

**response** - To further provide proof that our method is effective, we provide results on the popular ProcessBench dataset, which has step-level annotations done by humans.

GSM8k (400 examples)

| Model | Method | Correct acc | Wrong acc | F1 | Δ (Delta) |
|------------|----------|-------------|-----------|------|-----------|
| Llama 8B | Baseline | 17.1 | 36.7 | 23.3 | – |
| | Ours | 33.7 | 37.8 | 35.6 | 12.3 |
| Llama 70B | Baseline | 77.7 | 57.5 | 66.1 | – |
| | Ours | 89.6 | 70.0 | 78.6 | 12.5 |
| Qwen 7B | Baseline | 66.3 | 36.7 | 47.2 | – |
| | Ours | 60.1 | 38.6 | 47.0 | -0.2 |
| Qwen 32B | Baseline | 97.9 | 43.0 | 59.8 | – |
| | Ours | 95.9 | 55.1 | 70.0 | 10.2 |
| Qwen 72B | Baseline | 98.4 | 61.4 | 75.6 | – |
| | Ours | 97.8 | 59.7 | 74.1 | -1.5 |

MATH (1000 examples)

| Model | Method | Correct acc | Wrong acc | F1 | Δ (Delta) |
|------------|----------|-------------|-----------|------|-----------|
| Llama 8B | Baseline | 5.6 | 19.1 | 8.7 | – |
| | Ours | 11.0 | 27.5 | 15.7 | 7.0 |
| Llama 70B | Baseline | 32.4 | 32.8 | 32.6 | – |
| | Ours | 61.6 | 55.4 | 58.3 | 25.7 |
| Qwen 7B | Baseline | 46.0 | 25.4 | 32.7 | – |
| | Ours | 45.6 | 41.2 | 43.3 | 10.6 |
| Qwen 32B | Baseline | 90.0 | 22.4 | 35.9 | – |
| | Ours | 86.9 | 53.9 | 66.5 | 30.7 |
| Qwen 72B | Baseline | 88.5 | 33.7 | 48.8 | – |
| | Ours | 86.7 | 53.9 | 66.5 | 17.7 |

**concern 2** - the baseline performance of these models is missing

**response** - In our paper we have two tasks - premise identification and error identification. The former is a novel task in itself, hence we don’t explicitly compare against any baseline as such.
Instead, we compare the aggregative and dyadic approaches as an ablation of how premises can be identified. For the error identification task, the baseline is when the entire context is fed into the LLM, as compared to our approach where we only use the generated premises (a standard LLM-as-a-judge scenario; please refer to Table 3, where the rows tagged "full context" are the baseline). We would also like to highlight that existing popular frameworks like Roscoe and Receval assign chain-level scores, and hence are not directly comparable to our use case.

**concern 3** - discussion of the additional compute resources, time and costs

**response** - As we query the judge LLM twice, we observe a 2x latency compared to the baseline. We empirically verified this for the Qwen 32B model, across all 4 datasets.

**question 1** - Why do longer reasoning chains make it harder to verify individual steps?

**response** - The longer a reasoning chain gets, the longer the context is for the later steps. This implies that a lot of irrelevant information (non-premises) is fed into the context, making it hard for the model to reason about whether the step was correct or not.

**question 2** - If the nodes are the reasoning steps, how can the edges link to premises?

**response** - In our DAG formulation, each node is a step, and directed edges capture dependencies between steps (nodes). More specifically, if step i is a premise of step j, there is an edge (i -> j).

**question 3** - Why are there more PARCs than chains in the dataset? (A2)

**response** - Indeed, it should be 607, and we will correct this in our final paper.
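The DAG formulation in the rebuttal above (each node is a reasoning step; an edge (i -> j) exists when step i is a premise of step j) also makes the accumulation-error idea mechanical: a locally correct step whose premise ancestors include an erroneous step inherits the flaw. The following is an illustrative sketch only; the function and label names are my own, not the authors' implementation:

```python
# Illustrative sketch of a Premise-Augmented Reasoning Chain (PARC) as a DAG.
# Nodes are reasoning steps; premises[j] is the set of steps that are premises
# of step j (i.e. edges i -> j). A step with a native (local) error is wrong;
# a locally correct step built on a flawed premise gets an accumulation error.

def classify_steps(premises, native_error):
    """premises: dict step -> set of premise steps (earlier in the chain).
    native_error: dict step -> True if the step is locally wrong.
    Returns step -> 'correct' | 'native_error' | 'accumulation_error'."""
    labels = {}
    # Steps in a chain are already in topological order, so one pass suffices.
    for j in sorted(premises):
        if native_error[j]:
            labels[j] = "native_error"
        elif any(labels[i] != "correct" for i in premises[j]):
            labels[j] = "accumulation_error"  # locally fine, flawed premises
        else:
            labels[j] = "correct"
    return labels

# Example: step 2 has a native error; steps 3 and 4 build on it.
premises = {1: set(), 2: {1}, 3: {2}, 4: {1, 3}}
native_error = {1: False, 2: True, 3: False, 4: False}
print(classify_steps(premises, native_error))
# -> {1: 'correct', 2: 'native_error', 3: 'accumulation_error', 4: 'accumulation_error'}
```

Because a reasoning chain is generated step by step, its steps already arrive in topological order, so no explicit topological sort of the DAG is needed.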
Summary: This paper studied the step-level verification of CoT reasoning, and proposed a PARC framework that converted linear reasoning chain into DAG by introducing premise links. Based on the framework, the authors defined a new error type named accumulation error, and constructed PERL dataset to evaluate the framework. The experiments demonstrated that PARC helped to identify step-level errors in CoTs. ## update after rebuttal The responses have addressed most of my concerns. After reading other reviews and authors' responses, I decide to keep my score. Claims And Evidence: The claims are well supported by the experimental results. The authors specially constructed several datasets and conducted extensive experiments to demonstrate the effectiveness of the proposed framework, including the premise identification and error identification. Methods And Evaluation Criteria: Overall the proposed method works for CoT verification. However, there are still some issues that are not clearly explained in method design. 1. Since the premise is based on the step, it would be better to make it clearer what is an intermediate step defined, and how to get the step list from a complete CoT. 2. How did the authors ensure or evaluate whether the premise satisfied the three properties in page 3. And how to evaluate whether a step is verifiable. 3. What’s the necessity of the proposed accumulation error. According to the paper, if the error exists, it means that there are native errors, which is enough to indicate the incorrect reasoning. Theoretical Claims: There is no theoretical claim in the paper. Experimental Designs Or Analyses: The experiments are extensive and solid. I have one concern that in dataset construction, it may be problematic to treat the CoT as correct based on the final answer, due to possible step errors as studied in the paper. Supplementary Material: The authors provided more experimental details, prompts, and more experimental results in the appendix. 
Relation To Broader Scientific Literature: The paper focused on the verification of the CoT based on a premise DAG. Compared with existing works, the authors proposed an automatic premise recognition method that neither relies on predefined CoT templates nor hurts reasoning, and proposed a novel error type focusing on error propagation. Essential References Not Discussed: Related works are well cited. Other Strengths And Weaknesses: In summary, the strengths of the paper are as follows. 1. The paper proposed a novel premise-DAG-based method, PARC, for step-level verification of CoT reasoning. 2. The authors conducted extensive experiments to demonstrate the effectiveness of the framework. The weaknesses of the paper are as follows. 1. Some details of the method design are unclear in the paper. See the "Methods" and "Experiments" parts. Other Comments Or Suggestions: None. Questions For Authors: 1. How did the authors ensure or evaluate whether the premise satisfied the three properties on page 3? And how to evaluate whether a step is verifiable? 2. What’s the necessity of the proposed accumulation error, as the native errors are enough to indicate the incorrect reasoning? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their kind remarks on the novelty and thorough experimental setup of our method. Here we address their concerns. **concern 1** - Since the premise is based on the step, it would be better to make it clearer what an intermediate step is, and how to get the step list from a complete CoT. **response** - In our work, we explicitly prompt the generator model to answer in a step-by-step format, with formatting instructions in the prompt. The generated solution looks like "Step 1: … Step 2: …". Finally, we apply a simple regex to extract the steps. **concern 2** - How did the authors ensure or evaluate whether the premise satisfied the three properties on page 3? **response** - In our work, we prompt the highly capable O1-preview model to generate the ground truth premises and later manually verify whether the generated premises satisfy the verifiability and minimality conditions. **concern 3** - What’s the necessity of the proposed accumulation error? **response** - A holistic evaluation of LLM reasoning should consider the entire reasoning chain rather than relying solely on a binary correct/incorrect outcome of the final answer. Reasoning chains may contain subtle intermediate errors despite following a globally correct plan, ultimately rendering the solution incorrect, a nuance overlooked by final-answer correctness metrics. Prior works like PRM800K and ProcessBench have typically annotated reasoning chains only up to the first erroneous step, discarding subsequent steps due to ambiguity. To our knowledge, we are the first to formally introduce the concept of accumulation errors, enabling a more comprehensive evaluation of reasoning chains. Similar to how teachers award partial credit for nearly correct answers, evaluation frameworks should recognize when the overall reasoning plan is sound despite minor mistakes, assigning partial credit accordingly.
Accumulation errors, where a step is locally correct but built on flawed premises, explicitly capture this subtlety. Identifying accumulation errors highlights how earlier mistakes compromise the reliability of the reasoning chain, making it essential to incorporate these errors into holistic scoring methods. **concern 4** - it may be problematic to treat the CoT as correct based on the final answer, due to possible step errors as studied in the paper. **response** - We completely agree with the reviewer on this. In our work, we observed that even in the positive reasoning chains, there were a few steps annotated as errors by the O1 model; indeed, LLMs suffer from false positives, as established in https://arxiv.org/pdf/2502.06217 . In GSM8k: 4 false positives out of 50; in MATH: 4 out of 50; in MetaMathQA: 5 out of 50; in OrcaMath: 4 out of 50. When we release our dataset, we will make sure to flag them as false positives. --- Rebuttal Comment 1.1: Comment: Thanks for the time on responses. The responses have addressed most of my concerns. According to the responses, the proposed method cannot be applied to more general CoTs, and the authors have not technically ensured the satisfaction of the premise properties, so I would keep my score unchanged. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their time and feedback. Here is our response to their concerns. 1. **proposed method cannot be applied to more general CoTs** We would like to highlight that we also did experiments on the ProcessBench dataset (the results are available under **concern 1** by **reviewer kjS7**), where the chains of thought are **not** generated in a step-by-step manner and a simple delimiter is used to separate steps. Our method consistently outperforms the baseline in that case as well. Our experiments show that PARC consistently outperforms verification baselines irrespective of the style in which the reasoning chain was generated. 2. **authors have not technically ensured the satisfaction of the premise properties** Unfortunately, reasoning chains are in natural language and hence highly ambiguous, which is why we annotate a high-quality dataset against which predicted premises can be compared. Here is how we ensured the satisfaction of the premise properties. **For our PERL dataset:** We ensure that both premise properties (verifiability and minimality) are satisfied for our PERL dataset. At the time of construction of the dataset, we manually check the generated data to make sure the conditions are met (detailed in lines 314-328). **At inference time:** At inference time, we compare the premises generated by the models with the ground truth premises to report our accuracy metrics. Since we already have a reliable set of premises, these metrics capture well how good the models are at predicting premises.
Summary: This paper introduces a new category of errors (accumulation errors) and Premise-Augmented Reasoning Chains (PARC) as a method to improve error identification in mathematical reasoning with Large Language Models (LLMs). To evaluate this method, the authors construct PERL (Premises and ERrors identification in Language models), a benchmark dataset containing annotated premise links and error types in mathematical reasoning chains. Their results demonstrate that LLMs can achieve ≥90% recall in premise extraction, highlighting the effectiveness of their method. ## update after rebuttal I'd like to thank the authors for their response and keep my positive rating. Claims And Evidence: The authors claim that “off-the-shelf LLMs can detect premises for a given step with high accuracy for mathematical reasoning.” However, my main concern lies in the low precision of premise identification and mapping, as reported in Tables 1 and 2. While I acknowledge the high recall achieved by LLMs, precision remains a critical factor, particularly given the minimality requirement defined in Lines 151–164—where a premise set should be minimal such that removing any element renders the corresponding step unverifiable. With precision ranging from 60% to 80%, the identified premises do not seem to consistently meet this minimality criterion, leaving the authors' claim insufficiently substantiated. Methods And Evaluation Criteria: The PARC framework is well-motivated and introduces an intuitive premise-based verification process for mathematical reasoning. The PERL dataset encompasses a diverse set of math word problems from GSM8K, MATH, Orca-Math, and MetaMathQA, ensuring a broad range of difficulty levels. Additionally, the chosen evaluation metrics—precision, recall, and F1 for premise detection, along with accuracy for error identification—are appropriate for assessing the effectiveness of the proposed approach. 
Theoretical Claims: The formalization of premise extraction and mapping problem (Section 3.1) is reasonable. However, the paper lacks in-depth theoretical guarantees or analysis on the solution quality of the proposed algorithm (Algorithm 1), leaving open questions about its optimality and robustness in premise identification. Experimental Designs Or Analyses: The paper presents a thorough experimental design, with evaluations conducted across multiple datasets such as GSM8K and MATH, ensuring a diverse assessment of the proposed method. However, a major concern lies in the experimental setup described in Section 5.2, particularly in Lines 373–378, where the authors state that Mathematical Error and Logical Inconsistency are merged into a single error type, Error, due to their “thin boundary”. Since these are well-established and distinct error types, the inability of models to differentiate between them suggests a limitation in model capacity rather than an inherent issue with error categorization. Merging these categories appears to artificially simplify the task, potentially leading to an ad-hoc and unfair evaluation of performance. A more rigorous analysis preserving the original error distinctions would provide a clearer assessment of the model’s reasoning capabilities. Supplementary Material: The supplementary materials provide additional resources, including detailed descriptions of the dataset, model prompts, and further experimental results. Relation To Broader Scientific Literature: This work introduces a new error taxonomy by defining accumulation errors and situates itself within existing research on reasoning verification. While building on prior work, it presents a novel structured verification and error detection approach, contributing a more systematic method for identifying and analyzing reasoning errors in math reasoning tasks. 
Essential References Not Discussed: Graph-of-Thoughts [1] also discusses non-linear reasoning structures, which relate to PARC’s DAG-based structure but is not cited. A discussion would be helpful. [1] Besta, Maciej, et al. "Graph of thoughts: Solving elaborate problems with large language models." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 16. 2024. Other Strengths And Weaknesses: See previous sections. Other Comments Or Suggestions: See previous sections. Questions For Authors: See previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their kind remarks on the novelty and thorough experimental design. Here we address the concerns.

**concern 1** - low precision of premise identification and mapping

**response** - We would like to highlight that our claim is based primarily on the high recall. In the context of error identification, verifiability (recall) is more important than minimality (precision), since missing even a single premise can compromise the verifiability of the step and of subsequent steps. Further, upon closer inspection we observe that low precision often results from the ground truth premise set being very small. For example, if step 7 has ground truth premises {1} and the model predicts {0,1}, precision is 50%, even though the context remains well-pruned.

**concern 2** - lack of theoretical guarantees

**response** - We primarily take an empirical route, and as already pointed out by you, we conduct thorough experiments across model families, scales, and datasets to prove the robustness of our method. We hope that the released dataset can help the community foster more research in this direction as well. To further provide proof that our method is effective, we provide results on the popular ProcessBench dataset, which has step-level annotations done by humans. **Please refer to our response to concern 1 raised by Reviewer kjS7 for the ProcessBench results.**

**concern 3** - Mathematical Error and Logical Inconsistency are merged

**response** - When we say that the boundary between them is “thin”, we imply that even a simple mathematical calculation (or manipulation) could easily be considered a logical error (due to a fault in mathematical logic). However, we agree that for completeness it is essential to have a detailed breakdown per error category, which we present here.
(We will include these in the final version of the paper.)

GSM8k

| Model | Context | Logical Error | Mathematical Error |
|------------------|----------------|---------------|--------------------|
| Llama 3.1 8b | Full context | 29.9 | 41.1 |
| | Model premises | 55.6 | 64.9 |
| Llama 3.1 70b | Full context | 48.5 | 72.4 |
| | Model premises | 79.6 | 64.5 |
| GPT4o-mini | Full context | 33.4 | 58.6 |
| | Model premises | 64.7 | 65.8 |
| GPT-4o | Full context | 44.4 | 52.4 |
| | Model premises | 64.1 | 74.0 |
| Qwen 7b | Full context | 21.4 | 20.7 |
| | Model premises | 59.0 | 39.1 |
| Qwen 72b | Full context | 31.7 | 44.5 |
| | Model premises | 69.9 | 64.6 |

MATH

| Model | Context | Logical Error | Mathematical Error |
|------------------|----------------|---------------|--------------------|
| Llama 3.1 8b | Full context | 46.4 | 45.1 |
| | Model premises | 57.9 | 63.7 |
| Llama 3.1 70b | Full context | 46.0 | 75.0 |
| | Model premises | 79.0 | 66.3 |
| GPT4o-mini | Full context | 54.1 | 61.5 |
| | Model premises | 83.1 | 67.4 |
| GPT-4o | Full context | 55.1 | 59.4 |
| | Model premises | 63.4 | 64.2 |
| Qwen 7b | Full context | 27.7 | 34.7 |
| | Model premises | 77.6 | 49.2 |
| Qwen 72b | Full context | 47.8 | 63.8 |
| | Model premises | 77.7 | 66.8 |

**concern 4** - Graph-of-Thoughts [1] also discusses non-linear reasoning structures

**response** - Thanks for raising this. Graph of Thoughts induces structure at inference time, while we do it after inference and later use that structure to improve error identification. But there is definitely a resemblance between them, and we will include the citation in our camera-ready version.
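The precision behaviour discussed in the response to concern 1 above (ground truth premises {1}, predicted {0,1} gives 50% precision but 100% recall) is ordinary set-level precision/recall over premise sets; a minimal sketch for intuition (the helper name is hypothetical, not from the paper):

```python
# Set-level precision/recall/F1 between a predicted and a gold premise set,
# as used when scoring premise identification for a single step.

def premise_prf(predicted, gold):
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)  # correctly predicted premises
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example from the rebuttal: gold premises {1}, model predicts {0, 1}.
print(premise_prf({0, 1}, {1}))  # precision 0.5, recall 1.0
```

This makes concrete why a tiny gold set drags precision down even when the predicted context is well pruned: one spurious premise against a singleton gold set already halves precision while recall stays perfect.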
What Limits Virtual Agent Application? OmniBench: A Scalable Multi-Dimensional Benchmark for Essential Virtual Agent Capabilities
Accept (oral)
Summary: This submission introduces OmniBench, aiming to provide a scalable task synthesis paradigm along with an evaluation framework for agent capabilities across ten dimensions. Due to the inherent complexity of agent trajectories, it is challenging to effectively construct large-scale, high-quality trajectory datasets. Existing trajectory datasets in the agent community are typically three orders of magnitude smaller than GUI grounding datasets (thousands vs. millions), struggling to provide sufficient supervision to enhance agents' navigation capabilities across various environments. To construct tasks more systematically, the authors propose the Task Graph, a novel task modeling approach. They treat frequently occurring subtasks as nodes in the task graph. First, they synthesize subtasks and apply cross-verification to obtain gold subtasks. Then, they synthesize tasks through rule-based subtask composition. This synthesis pipeline not only avoids the need for strong task planning capabilities in top-down task decomposition but also exponentially scales up task synthesis by reusing synthetic gold subtasks. Additionally, the authors quantify task complexity based on five fundamental attributes of the task graph (e.g., depth, number of nodes, number of edges, etc.). By constraining the complexity levels of the five dimensions, the authors filter test tasks for evaluating 10 capabilities and reveal the fine-grained performance of 12 agents, providing guidance to the community on the direction of future optimization. ## update after rebuttal The authors' rebuttal has addressed my concerns. I think the previous score is high enough and I will keep my rating. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. I have reviewed the submission and did not find any inappropriate claims. Methods And Evaluation Criteria: Yes, I think the methods and evaluation criteria make sense.
Considering agent tasks as a directed acyclic graph provides a rational foundation for subsequent task synthesis, task complexity definition, and agent capability design. Additionally, Figure 6 shows that OmniEval aligns with human evaluation, demonstrating that the evaluation criteria also make sense. Theoretical Claims: I have reviewed the proofs for the theoretical claims and consider them to be correct. Experimental Designs Or Analyses: Yes, I have checked the experimental designs and analyses. As a benchmark, I consider the experiments to be sound and valid. If the authors can further validate the quality of the training data, this submission would have a greater impact. Supplementary Material: Yes, I have reviewed all the content listed by the authors in the supplementary material. I notice that the authors have open-sourced the evaluation framework for the tasks, which includes the implementation of the graph-based evaluator, consistent with what is shown in Figure 5. Additionally, I find that they have also released the prompts used for task synthesis and the code for environment exploration. In the appendix, the authors introduce the applications in the environment, the task synthesis pipeline, and the details of evaluation. Relation To Broader Scientific Literature: OmniBench provides a more fine-grained evaluation of agents, unlike most benchmarks that focus on task success rates. It evaluates ten diverse capabilities, including dependency identification and long-range planning, similar to benchmarks like MMBench designed for MLLMs. However, in the agent domain, this might be the first attempt I am aware of to define the fine-grained capabilities required for agents. Essential References Not Discussed: I believe the inspiration for evaluating various agent capabilities in the submission might come from benchmarks [1, 2] used to evaluate MLLM capabilities. However, this is not explicitly discussed in the submission. 
Including such a discussion would make the submission more impactful. [1] MMBench: Is Your Multi-modal Model an All-around Player? [2] SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension Other Strengths And Weaknesses: ## Strengths S1. The authors propose a data collection pipeline to automatically synthesize high-quality demonstration trajectories and evaluation functions. The authors designed three quality-control modules to ensure data quality, achieving a 91% human acceptance rate. The proposed bottom-up synthesis provides a highly valuable insight, which can be important for the community: fully leveraging the repeatedly occurring sub-trajectories in trajectory data. S2. The authors introduce the concept of the Task Graph, allowing for a systematic quantification of task complexity based on fundamental graph properties (e.g., depth, number of edges, etc.). By constraining the complexity levels of the five dimensions, the authors filter test tasks for evaluating 10 capabilities. S3. Experiments are conducted to evaluate agents across 10 capability dimensions, revealing the fine-grained performance differences between open-source and closed-source agents, providing precise directions for future optimization. ## Weaknesses W1. On the left side of lines 430 to 434, the authors state that "By incorporating intents into the prompt, we observed a marked improvement in the agents’ performance on tasks designed to evaluate planning." Additionally, Figure 9 illustrates the impact of intent on OS-Atlas-Base and Uground-V1 in the agent setting, showing a significant improvement in planning capabilities when task intent is incorporated. However, the intent here is actually provided as input to the planner (i.e., GPT-4o), and the authors don’t test the effect of feeding intent into end-to-end agents. If task intent is only effective on GPT-4o, it is hard to prove that task intent is as meaningful as stated in the submission. W2. 
The subfigure in the bottom left of Figure 1 shows that the graph-based data is of high quality, but I couldn't find any related experimental descriptions in the article. I believe that comparing the synthetic trajectories with human demonstrations could reveal the quality of the OmniBench dataset. Therefore, I suggest that the authors train the agent separately on these two types of data and evaluate its performance on other benchmarks. W3. The authors define task complexity from five dimensions in Table 2 but don’t evaluate the performance of agents on tasks within a specific dimension. The experimental results are valuable to the community, as they could reveal whether certain models exhibit preferences for specific types of tasks. Other Comments Or Suggestions: 1. The metric ND under "Overall" in Table 4 has not appeared before. I suspect this is a typo by the authors. 2. Change 'we introduced OmniBench' in line 429 on the right side, 'we observed a marked' in line 431 on the left side, and 'We compared' in line 378 on the left side to the present simple tense. 3. The line spacing around line 266 seems too tight and should be adjusted. 4. In the subfigure at the bottom left of Figure 1, the authors may have forgotten to replace 'x' with the actual values. Additionally, this figure does not seem to intuitively illustrate that task complexity has three levels and its relationship with agent capabilities. Questions For Authors: Q1. How is the "predefined resource list for determining subtask inputs and outputs" mentioned in the submission implemented? Does the predefined resource list come from the LLM? What are the design principles for the resources? Q2. I noticed that the appendix of OmniBench introduces several apps in the environment, including some that require an internet connection (e.g., Skype, Mail, DeepL). How do the authors ensure that the environment remains consistent for each evaluation? 
I am curious about this because interactive benchmarks [1, 2, 3] typically include only offline apps/websites (e.g., Microsoft Office, VS Code) to ensure a consistent environment for each evaluation. [1] AndroidWorld: A Dynamic Benchmarking Environment for Autonomous Agents [2] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments [3] WebArena: A Realistic Web Environment for Building Autonomous Agents Code Of Conduct: Affirmed. Overall Recommendation: 4
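The Task Graph complexity quantification described in this review (depth, number of nodes, number of edges, etc.) can be illustrated with a minimal sketch. The attribute names and the toy graph below are hypothetical illustrations, not the paper's actual implementation:

```python
# Minimal sketch of quantifying task complexity from a Task Graph (DAG).
# The metric names and the toy graph are illustrative assumptions only.
from collections import defaultdict

def complexity(edges):
    """Compute simple graph attributes of a DAG given as (parent, child) pairs."""
    nodes = {n for e in edges for n in e}
    children = defaultdict(list)
    indeg = {n: 0 for n in nodes}
    for u, v in edges:
        children[u].append(v)
        indeg[v] += 1

    # Depth = longest root-to-leaf path, counted in nodes (memoized DFS).
    memo = {}
    def depth(n):
        if n not in memo:
            memo[n] = 1 + max((depth(c) for c in children[n]), default=0)
        return memo[n]

    roots = [n for n in nodes if indeg[n] == 0]
    return {
        "num_nodes": len(nodes),
        "num_edges": len(edges),
        "depth": max(depth(r) for r in roots),
        "max_branching": max(len(c) for c in children.values()),
    }

# Toy task graph: open_app -> {download_file, copy_text} -> send_mail
edges = [("open_app", "download_file"), ("open_app", "copy_text"),
         ("download_file", "send_mail"), ("copy_text", "send_mail")]
print(complexity(edges))
```

Constraining such attributes per dimension is what allows test tasks of a chosen complexity level to be filtered automatically.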
Rebuttal 1: Rebuttal: Thank you for appreciating our paper as comprehensive and identifying its highly valuable insights; your constructive comments and suggestions are valuable to us. Below is our detailed response to clarify the points you raised. **Q1: The Effect of Task Intent on Planning.** **A1:** Task intent is designed to enhance the planning capability of the planner in a plug-and-play way. We further explore the generalizability of task intents on **open-source and closed-source models**. (1) For open-source models, we fine-tune OS-Atlas-Base-4B and UGround-V1-7B on the same dataset with and without task intent, respectively. As shown in the following table, incorporating task intent in the training data significantly improves the model's planning performance on OmniBench. This indicates that task intent has the potential to serve as fine-tuning data to guide models in improving their planning capabilities.

|Models|Parallel Planning|Long-Range Planning|
|-|-|-|
|Omni-OS-Atlas-Base-4B|24.2|33.0|
|**+intent tuning**|**25.7 (+1.5)**|**34.9 (+1.9)**|
|Omni-UGround-V1-7B|33.2|31.3|
|**+intent tuning**|**34.4 (+1.2)**|**32.6 (+1.3)**|

(2) For closed-source models, we prompt Qwen-VL-Max, Gemini-2.0-Flash, and Claude-3.5-Sonnet as planners with and without task intent, with UGround-V1-7B serving as the grounding model.

|Models|Parallel Planning|Long-Range Planning|
|-|-|-|
|Qwen-VL-Max|21.9|20.8|
|**+intent prompt**|**24.5 (+2.6)**|**23.5 (+2.7)**|
|Gemini-2.0-Flash|23.1|22.7|
|**+intent prompt**|**28.9 (+5.8)**|**26.7 (+4.0)**|
|Claude-3.5-Sonnet|24.2|23.7|
|**+intent prompt**|**30.6 (+6.4)**|**28.1 (+4.4)**|

&nbsp; **Q2: Comparison with Human Demonstration Trajectories.** **A2:** Thank you for pointing out that OmniBench may yield high-quality graph-based tasks. To address this, we compare human demonstration trajectories from the GUIAct dataset with synthetic trajectories from OmniBench. 
We conduct experiments on the OmniAct-Web dataset and apply the same fine-tuning settings for a fair comparison.

|Models|Type-Web|Grounding-Web|SR-Web|
|-|-|-|-|
|InternVL2-4B|47.51|51.34|24.39|
|Qwen2-VL-7B|89.22|85.94|78.58|
|SeeClick|86.98|75.48|68.59|
|OS-Atlas-4B|88.56|82.00|73.91|
|&nbsp; +1k human demonstrations|88.64|82.34|74.06|
|&nbsp; +1k synthesized trajectories|**88.71**|**82.50**|**74.12**|
|UGround-7B-V1|90.16|86.98|79.85|
|&nbsp; +1k human demonstrations|90.23|87.19|80.02|
|&nbsp; +1k synthesized trajectories|**90.29**|**87.28**|**80.11**|

The results in the above table show that **graph-based trajectories in OmniBench are of high quality and lead to better performance**. &nbsp; **Q3: Performance per Complexity Dimension.** **A3:** Thanks for your constructive suggestions. We report the models' performance across different levels within each complexity dimension in the table below.

|Models|D-Easy|D-Medium|D-Hard|B-Easy|B-Medium|B-Hard|I-Easy|I-Medium|I-Hard|K-Easy|K-Medium|K-Hard|H-Easy|H-Medium|H-Hard|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|Aguvis-7B|32.8|27.6|24.3|41.2|36.8|30.6|49.5|36.9|25.3|38.4|32.5|27.6|37.9|33.6|29.7|
|OS-Atlas-Pro-7B|32.3|26.8|23.7|39.1|31.0|25.4|44.3|34.8|21.8|33.9|28.4|24.3|34.5|28.1|25.6|
|ShowUI-2B*|34.0|28.3|25.6|41.3|32.7|28.2|45.9|36.6|25.4|37.8|32.6|27.4|37.6|32.0|28.1|
|OS-Atlas-Base-4B*|32.7|29.1|24.9|35.2|32.4|27.6|48.2|37.5|26.7|39.1|34.2|28.9|43.1|38.4|33.2|
|UGround-7B*|34.1|30.0|27.1|44.6|38.3|32.4|53.0|39.4|27.2|42.3|36.4|32.6|35.7|28.8|25.5|

where D means Dependency, B means Branching, I means Instruction, K means Knowledge, and H means Hierarchical. As can be seen, the models' performance gradually decreases as complexity increases, indicating that the **task complexity dimensions are reasonable and independent**. &nbsp; **Q4: How to design predefined resource lists for determining subtask inputs and outputs?** **A4:** Thanks for your interest in the details of our subtask determination. 
To capture the specific state of the virtual environment, we manually design cold-start resources based on the principles of **subtask definition and task transition** to ensure accuracy and facilitate seamless integration between subtasks. To further explain the implementation, we provide additional examples of input and output resources in Figure 1 via the anonymous link: https://anonymous.4open.science/r/OmniBench_rebuttal-A4C7/r4.md **Q5: How to ensure environmental consistency?** **A5:** Thanks for your comment; we also believe this verification is important. Since there is an inherent trade-off between environmental realism and consistency, when introducing web-based applications (e.g., Skype, Mail, DeepL) to the OmniBench evaluation environment, we **use firewalls to block their outbound traffic and preload the necessary page state** to provide an environment that ensures realism and consistency. **Q6: Writing issues.** **A6:** We sincerely thank you for your constructive suggestions on expression, organization, figures, and citations. We will correct them in the next version. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I believe it has addressed all my concerns. Once again, I find the ideas meaningful and thought-provoking, particularly regarding the multi-dimensional capability evaluation. Due to the inherent differences between agent tasks and traditional LLM tasks, it's difficult to directly design specific test sets for each target capability, as is commonly done in previous LLM benchmarks. So the idea of leveraging graph-based auxiliary information to enable such evaluation is insightful for the community. I hope the authors will open their code to the community. I'd be happy to champion this paper. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your recognition of our work. Your feedback is truly encouraging and means a great deal to us. 
We will release the code and dataset upon acceptance of the paper to support progress within the community. Once again, thank you very much for your kind support.
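The resource-based subtask composition discussed in Q4/A4 above can be sketched roughly as follows: each subtask declares input and output resource types, and two subtasks chain seamlessly when the first's outputs cover the second's required inputs. The subtask names and resource types below are hypothetical, not OmniBench's predefined resource list:

```python
# Illustrative sketch of bottom-up subtask composition by resource matching.
# Subtask names and resource types are hypothetical, not OmniBench's actual list.

SUBTASKS = {
    # name: (required input resource types, produced output resource types)
    "search_web":    (set(),     {"url"}),
    "download_file": ({"url"},   {"file"}),
    "attach_mail":   ({"file"},  {"draft"}),
    "send_mail":     ({"draft"}, {"sent_mail"}),
}

def can_chain(a, b):
    """b can follow a if a's outputs cover b's required inputs."""
    _, out_a = SUBTASKS[a]
    in_b, _ = SUBTASKS[b]
    return in_b <= out_a

def compose(names):
    """Return True if the sequence of subtasks chains seamlessly."""
    return all(can_chain(a, b) for a, b in zip(names, names[1:]))

print(compose(["search_web", "download_file", "attach_mail", "send_mail"]))  # True
print(compose(["search_web", "attach_mail"]))  # False: no 'file' produced yet
```

Rule-based matching of this kind is one way the composition step could avoid invalid subtask orderings without requiring an LLM to plan top-down.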
Summary: This paper introduces OmniBench, a scalable, graph-based benchmark designed to evaluate multimodal large language model (MLLM)-based virtual agents across multiple dimensions. OmniBench employs a bottom-up subtask composition pipeline to generate 36k tasks with controllable complexity across 20 scenarios. The OmniEval framework introduces Coverage Rate (CR) and Logical Consistency (LC) metrics to assess agents beyond simple success rates. Experiments reveal that current agents, including GPT-4o, struggle with graph-structured tasks, highlighting the need for better reasoning and planning. Fine-tuning on OmniBench improves performance, demonstrating its potential for advancing virtual agent capabilities. Claims And Evidence: Yes, the claims are supported by clear and convincing evidence. The authors' idea of defining task complexity across five dimensions using five graph properties is impressive. Methods And Evaluation Criteria: Yes, I think the methods and evaluation criteria make sense. The bottom-up task synthesis approach is highly innovative. More importantly, this synthesis method effectively identifies and reutilizes recurring subtasks in trajectory data, making it highly efficient. Theoretical Claims: The task graph proposed by the authors, combined with the bottom-up theoretical claims, is highly intuitive and I consider them to be correct. Experimental Designs Or Analyses: Yes, I have checked the experimental designs and analyses. I have reviewed the authors' comparison experiments between OmniEval and human evaluation, the performance of 12 agents on OmniBench, the analysis of task intent's impact on agents, and the comparison between graph-structured and chain-structured tasks. Supplementary Material: The authors provide the implementation code of the evaluation framework in an anonymous repository and include additional details on evaluation and data collection in the appendix of the submission. 
Relation To Broader Scientific Literature: OmniBench builds on recent advancements in graph-based evaluations for agents. The graph-based evaluator in OmniBench is derived from the relevant design in CRAB (CRAB: Cross-environment Agent Benchmark for Multimodal Language Model Agents). Additionally, there have been recent works on synthesized trajectories. For example, AgentTrek (AgentTrek: Agent Trajectory Synthesis via Guiding Replay with Web Tutorials) leverages collected tutorials to synthesize over 10,000 high-quality demonstration trajectories. Essential References Not Discussed: I do not find any essential papers that the authors failed to discuss. Other Strengths And Weaknesses: ### Strengths **S1.** The authors propose a novel trajectory synthesis method that significantly alleviates the shortage of agent demonstration trajectories. I believe this bottom-up synthesis approach incorporates a divide-and-conquer strategy, overcoming the challenges of directly synthesizing trajectories in a top-down manner by first synthesizing subtask trajectories and then composing them into task trajectories. Additionally, the synthesized trajectories undergo strict data filtering to ensure high quality. Finally, fine-tuning experiments on OmniBench further validate this approach. **S2.** The authors' Cross-Verification method cleverly integrates subtask trajectory data with evaluation functions. By leveraging mutual verification between these two types of data, it iteratively optimizes both the synthesized trajectories and the evaluation functions simultaneously. This significantly reduces the expert knowledge required to construct evaluation functions for an interactive benchmark. **S3.** The authors design a graph-based evaluator on a DAG and introduce two novel evaluation metrics, Coverage Rate (CR) and Logical Consistency (LC), enabling a more reasonable and fine-grained evaluation of agents. 
This evaluation approach differs from traditional result-based and trajectory-based evaluations and is better aligned with real-world tasks that involve complex parallel relationships. **S4.** The authors define task complexity across five dimensions using five graph properties, enabling the controlled synthesis of tasks with varying complexity. Additionally, these five complexity dimensions are leveraged to design ten agent capabilities through their composition. This approach facilitates a multidimensional evaluation of agents, laying the foundation for comprehensive future advancements. ### Weaknesses I find the ideas in this submission quite intuitive. The bottom-up task synthesis approach and the representation of tasks as graphs are particularly impressive. However, I still have some concerns about this submission: **W1.** Data Quality I am somewhat concerned about the quality of the synthesized data. The authors seem to have conducted training experiments only on OmniBench but have not performed similar experiments on other benchmarks. Although Table 3 presents an ablation study demonstrating the effectiveness of their designed quality control module, I would prefer a more intuitive way to showcase this. For example, the authors could evaluate the performance of agents trained on synthesized trajectories on other benchmarks. This might be more convincing than the results in Table 3 or those obtained solely on OmniBench. **W2.** The Rigor of the Conclusions Figure 7 shows that agents often struggle to handle graph-structured tasks. The authors mention on lines 362–365 that "most existing agents are predominantly fine-tuned on chain-structured tasks, which may result in their tendency to interpret graph-structured tasks as linear," attributing this to the chain-structured training data. Does this imply that existing agents plan the next action mainly based on the textual order of the task instructions? 
I believe the authors' conclusion is important and could offer valuable guidance for the future development of agents, but the authors should conduct relevant experiments to support this conclusion. **W3.** Experimental Setup In Table 4, the inputs for MLLMs are all A11Y + Screen, and they perform well under this setting. However, these results cannot be directly compared with those of the agents, as they belong to different experimental settings. I am curious about the performance differences between these MLLMs and specialized agents under a fair comparison on OmniBench. ### Minor Weaknesses I have some minor comments, which might help improve the writing quality. 1. I am interested in the Cross-Verification algorithm in this submission, which iteratively optimizes trajectories and evaluation functions. But I think the details of this algorithm could be explained more clearly. 2. It would be great to highlight more case studies. Although the authors provide quantitative results in Table 4, qualitative analyses of failure cases are also important for understanding the boundary capabilities of agents. 3. The 20 scenarios mentioned in the submission should be specified in more detail. Additionally, the statistics on OmniBench could be expanded, for example, by including statistics on the synthesized data, such as the step distribution of demonstration trajectories. Other Comments Or Suggestions: 1. When formatting the paper, pay attention to spacing. For example, the spacing around Section 4 appears too tight and should be adjusted. 2. For tables, the caption is generally placed above. 3. There is a numbering error in the introduction, located in the left section of line 122. 4. The authors should pay attention to some tense errors. There are several instances in the submission where the past tense is incorrectly used. 5. The subfigure in the bottom right corner of Figure 1 seems to be inconsistent with the experimental results in Table 4. 
The authors should revise it to ensure consistency. Questions For Authors: **Q1.** The authors don't seem to explain the source of the number of test tasks for each capability in the main figure. How were these test tasks obtained? Additionally, were the final test tasks in OmniBench manually filtered? **Q2.** Are there more examples of input and output resources, and how is their matching ensured so that two subtasks can be seamlessly connected? **Q3.** In Section 3.2, the authors mention composing a set of predefined APIs into a complete evaluation function using a code LLM (e.g., Claude). I'm curious about the specific APIs included and how the evaluation function is constructed. Could you provide an example of how these APIs are combined to form an evaluation function? Code Of Conduct: Affirmed. Overall Recommendation: 5
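A rough sketch of how path-agnostic, graph-based metrics along the lines of the Coverage Rate (CR) and Logical Consistency (LC) praised in S3 could be computed over a task DAG; the exact formulas here are assumptions for illustration and may differ from OmniEval's actual implementation:

```python
# Hypothetical sketch of graph-based metrics on a task DAG; the exact
# definitions of CR and LC in OmniEval may differ.

def coverage_rate(gold_nodes, completed):
    """Fraction of gold subtask nodes the agent completed (path-agnostic)."""
    return len(set(completed) & set(gold_nodes)) / len(gold_nodes)

def logical_consistency(edges, completed_order):
    """Fraction of dependency edges whose order is respected in the trajectory."""
    pos = {n: i for i, n in enumerate(completed_order)}
    relevant = [(u, v) for u, v in edges if u in pos and v in pos]
    if not relevant:
        return 1.0
    return sum(pos[u] < pos[v] for u, v in relevant) / len(relevant)

gold = ["open", "edit", "save", "close"]
edges = [("open", "edit"), ("edit", "save"), ("save", "close")]
trajectory = ["open", "save", "edit"]          # 'close' missing, save/edit swapped

print(coverage_rate(gold, trajectory))          # 0.75
print(logical_consistency(edges, trajectory))   # 0.5
```

Both metrics reward partial progress: the toy agent above completed three of four gold subtasks and respected one of the two applicable dependency edges, which a binary success rate would collapse to a single failure.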
Rebuttal 1: Rebuttal: We sincerely thank you for your professional comments and high appreciation of our work! We are encouraged that our research is recognized as laying the foundation for future advancements. We will address your concerns point by point. **1. Data Quality** Thank you for your suggestion; we conduct additional experiments on the OmniAct and AndroidControl benchmarks. We follow the experimental setup in the OS-Atlas paper, using our high-quality graph-based trajectories to train our models: **OS-Atlas-4B** and **UGround-7B-V1**. The results are shown in the following tables. Our models achieve superior performance on both benchmarks compared to the baselines, which showcases the **effectiveness of our graph-based trajectories**.

**Table: Evaluation on OmniAct**

|Models|Type-Web|Grounding-Web|SR-Web|Type-Desktop|Grounding-Desktop|SR-Desktop|
|-|-|-|-|-|-|-|
|InternVL2-4B|47.51|51.34|24.39|67.00|44.47|29.80|
|Qwen2-VL-7B|89.22|85.94|78.58|96.27|94.52|91.77|
|SeeClick|86.98|75.48|68.59|96.79|70.22|72.59|
|OS-Atlas-4B|88.56|82.00|73.91|96.51|85.53|84.78|
|UGround-7B-V1|90.16|86.98|79.85|97.13|94.79|91.89|
|Omni-OS-Atlas-4B(Ours)|89.96|82.74|74.62|97.64|86.37|85.53|
|Omni-UGround-7B-V1(Ours)|**91.24**|**87.35**|**80.24**|**97.93**|**95.21**|**92.10**|

**Table: Evaluation on AndroidControl**

|Models|Type-LL|Grounding-LL|SR-LL|Type-HL|Grounding-HL|SR-HL|
|-|-|-|-|-|-|-|
|InternVL2-4B|90.94|84.05|80.10|84.09|72.73|66.72|
|Qwen2-VL-7B|91.94|86.50|82.56|83.83|77.68|69.72|
|SeeClick|93.00|73.42|75.00|82.94|62.87|59.11|
|OS-Atlas-4B|91.92|83.76|80.64|84.69|73.79|67.54|
|UGround-7B-V1|92.15|87.17|83.29|84.72|78.85|70.31|
|Omni-OS-Atlas-4B(Ours)|**92.49**|83.51|81.38|84.86|73.81|67.71|
|Omni-UGround-7B-V1(Ours)|92.37|**87.24**|**83.57**|**84.89**|**78.97**|**70.83**|

&nbsp; **2. The Rigor of the Conclusions** We appreciate your insightful observations. 
We define the impact of textual order on the model as its instruction sensitivity, conducting experiments with standard deviation as the metric. We construct 10 specially designed test tasks, each associated with three task instructions that are semantically identical (based on the same task graph) but differ in textual order. As shown in the table below, the original MLLMs tend to be less sensitive to instruction variations but perform poorly overall. Though fine-tuning them on navigation tasks enhances performance, it also compromises the models' robustness to instructions, with OS-Atlas-Pro and Aguvis exhibiting significantly higher sensitivity. Moreover, **after incorporating graph-structured task samples from OmniBench into fine-tuning, the models' performance is further improved while largely preserving their robustness**, with Omni-OS-Atlas and Omni-Aguvis exhibiting reduced sensitivity. This indicates that the trajectory data from OmniBench can help models better recognize complex dependency structures embedded in task instructions.

|Models(backbone)|Avg. Sensitivity|
|-|-|
|Human|1.95|
|InternVL2-4B|2.97|
|Qwen2-VL-7B|2.58|
|OS-Atlas-Pro(InternVL2-4B)|9.07|
|Aguvis(Qwen2-VL-7B)|12.90|
|Omni-OS-Atlas(InternVL2-4B)|3.49|
|Omni-Aguvis(Qwen2-VL-7B)|2.67|

&nbsp; **3. Experimental Setup** Thank you for your valuable suggestions. Considering that MLLMs without grounding-specific training may struggle to perceive fine-grained UI elements in screenshots, we adopt the A11Y+Screen setup for them. We also conduct the evaluation with a Screenshot-only setup for a fair comparison. Comparing Table 4 with the table below, baseline MLLMs consistently achieve poorer performance in the Screenshot-only setup. 
|Models|PP|LRP|LSR|LIF|SDM|CDDM|SI|DI|CDK|DSK|
|-|-|-|-|-|-|-|-|-|-|-|
|Qwen2-VL-7B-Instruct|3.2|6.4|2.9|3.5|4.7|3.1|4.5|6.7|5.0|6.8|
|InternVL2-8B|3.3|5.9|2.8|3.2|4.9|3.2|4.7|6.2|4.1|5.3|
|InternVL2.5-8B|4.9|6.7|4.8|5.7|5.4|6.8|5.2|6.1|5.9|7.2|

&nbsp; **4. Minor Weaknesses** For a more detailed introduction to Cross-Verification, please refer to Appendix B.2. We present additional case studies in Fig1, specify the 20 scenarios in Fig2, and provide more statistics on OmniBench in Fig3 of the anonymous link: https://anonymous.4open.science/r/OmniBench_rebuttal-A4C7/r3.md &nbsp; **5. Questions For Authors** **To Q1:** In OmniBench, test tasks collected from the virtual environment are initially selected based on a joint constraint of five complexity dimensions. We also manually filter the tasks to improve quality. **To Q2:** Additional examples of input and output resources are provided in Fig1 via the anonymous link. These resource types, carefully designed by humans, capture the specific states of the virtual environment. Clearly defining these resources for subtasks effectively represents and constrains state transitions in complex virtual environments, facilitating seamless integration between subtasks. **To Q3:** We give examples of how APIs are combined in Fig4 of the anonymous link. &nbsp; We will integrate these experiments and correct the grammatical errors in the next version of the paper. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their detailed explanations in response to my questions. In the rebuttal, the authors provided clarifications regarding data quality, the rigor of the conclusions, and the experimental setup. I think my concerns have been adequately addressed. I consider OmniBench to be a strong contribution. It has the potential to prompt the community to rethink how to more comprehensively assess the true capabilities of virtual agents. I am willing to raise my score. 
--- Reply to Comment 1.1.1: Comment: Thank you very much for your strong and encouraging support of OmniBench! We are truly honored that you find our work meaningful and impactful. It is incredibly motivating to know that our contributions resonate with others in the community. Once again, we would like to express our heartfelt thanks for your valuable feedback and strong support.
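The instruction-sensitivity measurement described in the rebuttal above (the standard deviation of an agent's scores across semantically identical instruction reorderings, averaged over tasks) amounts to something like the following sketch; the scores below are toy placeholders, not results from the paper:

```python
# Sketch of the instruction-sensitivity metric from the rebuttal: the average
# standard deviation of an agent's score across reorderings of the same task
# instruction. The example scores below are toy values, not the paper's results.
from statistics import pstdev, mean

def avg_sensitivity(scores_per_task):
    """Mean (population) std-dev of scores across instruction variants, per task."""
    return mean(pstdev(variants) for variants in scores_per_task)

# 3 tasks x 3 instruction orderings each (toy scores)
scores = [
    [30.0, 32.0, 34.0],   # some spread across orderings
    [25.0, 25.0, 25.0],   # perfectly robust
    [40.0, 46.0, 43.0],   # larger spread
]
print(round(avg_sensitivity(scores), 2))
```

A lower value means the agent's behavior depends less on the textual order of the instruction, which is the robustness property the rebuttal is measuring.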
Summary: The paper introduces OmniBench, a scalable, graph-based benchmark designed to evaluate multimodal virtual agents by systematically synthesizing diverse tasks of controllable complexity through automatic task composition. It finds that existing agents significantly struggle with graph-structured tasks compared to linear tasks, and that explicitly including task intents notably improves their performance, especially in long-range planning and decision-making scenarios. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: no Experimental Designs Or Analyses: no Supplementary Material: no Relation To Broader Scientific Literature: benchmark for virtual agent Essential References Not Discussed: no Other Strengths And Weaknesses: see below Other Comments Or Suggestions: 1. The graph-based complexity metrics are superficial, e.g. number of nodes and edges, depth and width. More complex correlations between different tasks should also be considered, e.g. the feature correlation between nodes across graph topology [1], or some other metrics summarized in [2]. 2. "By incorporating intents into the prompt, we observed a marked improvement in the agents’ performance on tasks designed to evaluate planning (i.e., Long-Range Planning and Parallel Planning)," The improvement brought by the idea of intents looks similar to goal-based reinforcement learning. 3. Could you clarify why the proposed metrics, Coverage Rate (CR) and Logical Consistency (LC), are particularly suited for evaluating graph-structured tasks over more traditional metrics such as trajectory similarity or success rate alone? 4. The benchmark utilizes synthesized tasks within controlled environments. Could you discuss how well the proposed evaluation and results can generalize to less structured, real-world virtual agent applications? 5. 
Could you provide more details on the robustness of the intent extraction process, especially regarding the handling of subtasks whose intents might overlap or be ambiguously defined? [1] What Is Missing For Graph Homophily? Disentangling Graph Homophily For Graph Neural Networks. In The Thirty-eighth Annual Conference on Neural Information Processing Systems. [2] The heterophilic graph learning handbook: Benchmarks, models, theoretical analysis, applications and challenges. arXiv preprint arXiv:2407.09618. 2024 Jul 12. Questions For Authors: see above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive and insightful comments. We address your concerns point by point. **Q1: More graph-based complexity metrics** **A1:** Thank you for the valuable questions. First, we clarify that the current complexity metrics are based on five fundamental graph-based dimensions: dependency, branching, hierarchy, knowledge, and instruction. **Despite their simplicity, these node degree-based metrics are effectively aligned with the definition of subtask capabilities.** Additionally, following your suggestion, we also adopt feature correlation between nodes as a complexity metric to conduct further evaluation. As the table below shows, both metrics exhibit a high Pearson Correlation (ρ) with human evaluations, demonstrating their suitability for accurately reflecting model performance. We will include all these metrics in the revision.

|Metrics|ρ(CR, Human)|ρ(LC, Human)|
|-|-|-|
|degree-based|0.93|0.95|
|feature correlation|0.94|0.96|

&nbsp; **Q2: Compared to goal-based reinforcement learning** **A2:** Thanks for your insightful observation. Both our **intent-based prompting** and **goal-based reinforcement learning** leverage explicit goal representations to guide agents toward long-horizon objectives, thereby improving planning quality. In our case, the intent serves a role similar to goal conditioning in RL: it narrows the agent’s decision space, aligns actions with the final objective, and encourages completion of multi-step plans. This conceptual alignment highlights the value of goal-driven formulations across both reinforcement learning and prompt-based paradigms. &nbsp; **Q3: Detailed analysis of CR and LC** **A3:** Thanks for your interest in our metric details. **(1) Compared to trajectory similarity:** Trajectory similarity requires strict alignment with a reference path, while graph-structured tasks may have multiple feasible execution paths due to their inherent branching and parallelism. 
In contrast, **CR** and **LC** are path-agnostic, making them more robust to graph-structured tasks. **(2) Compared to the success rate:** The success rate only provides binary judgments of task completion and fails to evaluate the intermediate completion of subtasks. In contrast, **CR** and **LC** can fully leverage the intermediate feedback provided by the graph-based evaluator, allowing for fine-grained evaluation of agent behavior. **(3) Further experiments.** We assess the Pearson Correlation between human ratings and our metrics as well as traditional metrics on 300 sampled trajectories. The table below shows a **strong alignment** between our metrics and human evaluations, indicating that **CR and LC more faithfully reflect human judgment**.

|Metrics|Pearson Correlation|
|-|-|
|Trajectory Similarity|0.52|
|Success Rate|0.60|
|CR|0.93|
|LC|0.95|

&nbsp; **Q4: Generalization to less structured, real-world virtual agent applications** **A4:** Thank you for the valuable question. In fact, we have carefully designed realistic tasks in OmniBench, which enable the proposed evaluation and results to effectively generalize to real-world scenarios. To validate this, we conduct experiments on OmniAct, which collects data from real devices. Comparing Table 4 with the table below, models that perform well on OmniBench also achieve relatively better performance on OmniAct. Furthermore, the trajectories from OmniBench also effectively enhance our models' performance in real-world scenarios. This indicates that **the evaluation conclusions drawn from OmniBench can be generalized to real-world virtual agent applications**. 
|Models|Type-Web|Grounding-Web|SR-Web|Type-Desktop|Grounding-Desktop|SR-Desktop|
|-|-|-|-|-|-|-|
|InternVL2-4B|47.51|51.34|24.39|67.00|44.47|29.80|
|Qwen2-VL-7B|89.22|85.94|78.58|96.27|94.52|91.77|
|SeeClick|86.98|75.48|68.59|96.79|70.22|72.59|
|OS-Atlas-4B|88.56|82.00|73.91|96.51|85.53|84.78|
|UGround-7B-V1|90.16|86.98|79.85|97.13|94.79|91.89|
|Omni-OS-Atlas-4B(Ours)|89.96|82.74|74.62|97.64|86.37|85.53|
|Omni-UGround-7B-V1(Ours)|**91.24**|**87.35**|**80.24**|**97.93**|**95.21**|**92.10**|

More benchmark results are in Fig1 in the anonymous link: https://anonymous.4open.science/r/OmniBench_rebuttal-kvCN/r2.md

&nbsp;

**Q5: Robustness of intent extraction process**

**A5:** Considering the handling of subtasks whose intents might be overlapped or ambiguously defined, we employ an LLM to conduct post-verification processing for the intents. It consists of two steps: First, we instruct the LLM to determine whether the extracted intents are clearly and explicitly expressed. If not, we re-extract the intents. Second, we maintain a pool of existing task intents to avoid overlap. For each newly extracted intent, the LLM assesses whether it overlaps with any existing intents in the pool. If an overlap is detected, the new intent is discarded.

Thanks again for your valuable suggestions. We will integrate the above content in the next version.
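As a side note on the correlation numbers reported above, the Pearson coefficient between a metric and human ratings can be computed directly; the scores below are made-up stand-ins for illustration, not the 300 evaluated trajectories:

```python
import numpy as np

# Hypothetical per-trajectory scores: a metric (e.g. CR) and human ratings.
metric = np.array([0.2, 0.5, 0.55, 0.8, 0.9])
human = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

def pearson(x, y):
    # Pearson correlation: covariance of centered vectors over product of norms.
    x = x - x.mean()
    y = y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

rho = pearson(metric, human)
print(round(rho, 2))
```

A value close to 1 indicates the metric's ordering closely tracks the human ordering, which is the alignment claimed in A1 and A3 above.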
Summary: This paper introduces OmniBench, a graph-based benchmark that addresses the limitations of existing evaluation frameworks by enabling controllable task complexity through automated subtask composition. The paper also proposes OmniEval, a multidimensional evaluation framework for evaluating virtual agents across 10 capabilities. Evaluation results show that training on this data improves agent generalization.

Claims And Evidence: Claims made in the submission are well supported by experimental results.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem. OmniBench introduces a graph-based benchmark with automated task synthesis, allowing for controllable complexity through subtask composition. This addresses the issue of uncontrollable task complexity in existing benchmarks. The OmniEval framework provides a comprehensive evaluation across 10 capabilities, including subtask-level evaluation and graph-based metrics. This multi-dimensional approach offers deeper insights into agent performance compared to traditional evaluation methods.

Theoretical Claims: N/A

Experimental Designs Or Analyses: I have checked the experimental designs and analyses. The experiments are comprehensive. The evaluations on various models reveal performance differences and highlight areas for improvement in current virtual agents, providing valuable feedback for future advancements.

Supplementary Material: N/A

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:
Strengths:
1. The authors propose an automatic task synthesis method for evaluating virtual agents. This method can significantly improve the diversity and number of evaluation instances and can avoid extensive manual labor.

Weaknesses:
1. The authors should provide more detailed examples of the synthesized tasks.

Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We greatly appreciate your insightful feedback. To better demonstrate the diversity of automatically synthesized tasks in OmniBench, we restate that our approach first explores a variety of subtasks from the explorable environment and then iteratively synthesizes subtask trajectories and evaluation. Finally, the subtasks are composed into diverse tasks bottom-up. Next, we provide more detailed examples of the synthesized tasks.

**Example of the OmniBench Synthesized Tasks**

First, we present 7 subtasks discovered by the MLLM during environment exploration:

|Subtask ID|Instruction Template|Input|Output|Parameter Example|Application|
|-|-|-|-|-|-|
|A|Create a new PowerPoint file '{path}'|[]|[ppt_path]|{"path": "./Project_Proposal.ppt"}|PowerPoint|
|B|Apply the filter '{filter_name}' to the '{path}'|[img_path]|[img_path]|{"filter_name":"Cartoon", "path": "./portrait.png"}|Adobe Photoshop Express|
|C|Insert the image '{img_path}' into the PowerPoint file '{ppt_path}'|[ppt_path,img_path]|[ppt_path,img_path]|{"img_path": "./scenery.png", "ppt_path": "./Project_Proposal.ppt"}|PowerPoint|
|D|Download the image from the email sent by '{name}' to '{path}'|[]|[img_path]|{"path": "./portrait.png", "name": "Emily"}|Outlook|
|E|Send the PowerPoint file '{path}' to '{name}'|[ppt_path]|[ppt_path]|{"path": "./Project_Proposal.ppt", "name": "Emily"}|Outlook|
|F|Open the local file '{path}' and copy its contents to the clipboard|[]|[text_in_clipboard]|{"path": "./Emily.txt"}|File Explorer|
|G|Paste the text from the clipboard into the title box on the first slide of the PowerPoint file '{path}'|[text_in_clipboard,ppt_path]|[text_in_clipboard,ppt_path]|{"path": "./Project_Proposal.ppt"}|PowerPoint|

We also introduce 2 tasks constructed bottom-up from the above subtasks:

|Task ID|Instruction|Intent|DAG|
|-|-|-|-|
|1|Create a new PowerPoint file named `./Project_Proposal.ppt`, save the image from the email sent by Emily to `./portrait.png` and insert it into
the presentation. Then copy the content from the local `./Emily.txt` file into the title box on the first slide. Finally send the PowerPoint file back to Emily.|Create a personal introduction PowerPoint for Emily|{"A":["C"],"D":["C"],"F":["G"],"C":["G"],"G":["E"]}|
|2|Create a new PowerPoint file named `./Project_Proposal.ppt`, insert the image from the email sent by Emily into the presentation, then insert the image with the `Cartoon` filter applied as well. Finally, send the PowerPoint file back to Emily.|Send the comparison PowerPoint to Emily|{"D":["C1"],"A":["C1"],"C1":["C2","B"],"B":["C2"],"C2":["E"]}|

We also provide examples of corresponding subtask trajectories and the evaluation function.

**Example of Subtask Trajectory**

```json
{
    "trajectory_id": "XXX",
    "instruction": "Using the file explorer, navigate to C:\\Users\\user\\Desktop\\images\\ and new a Text Document named introduction.txt",
    "observations": [
        "obs1.png",
        "obs2.png",
        ...
    ],
    "actions": [
        {
            "function": "click_input",
            "args": {
                "button": "left",
                "double": false
            },
            "rect": [ 124, 1020, 179, 1080 ],
            "description": "There are many application icons on the taskbar, and I need to select the File Explorer to complete the task.",
            "thought": "To fulfill 'Using the file explorer, navigate to C:\\Users\\user\\Desktop\\images\\ and new a Text Document named introduction.txt', I need to first click the 'File Explorer' button to open the corresponding application.",
            "control_text": "File Explorer"
        },
        ...
    ],
    "subtask_id": "XXX"
}
```

**Example of Subtask Evaluation Function**

```python
import os
from collections import namedtuple

# check_mouse_clicks, check_keyboard_types, and check_file_exists are
# helper functions provided by the OmniBench evaluator environment.
EvalResult = namedtuple('EvalResult', ['success', 'message', 'progress'])

def evaluate_agent_task_completion(dir_path: str, file_name: str) -> EvalResult:
    # Extract the last directory name
    dir_path = dir_path.rstrip('/\\')
    folder_name = os.path.basename(dir_path)

    # Check if navigation to the specified directory was successful
    if not (check_mouse_clicks(text=folder_name) or check_keyboard_types(text=dir_path)):
        return EvalResult(False, "Subtask execution fails because agent did not navigate to the specified directory.", 0/2)

    # Check if the new text document was created
    file_path = os.path.join(dir_path, file_name)
    if not check_file_exists(file_path=file_path):
        return EvalResult(False, "Subtask execution fails because the file was not created in the directory.", 1/2)

    # All checks passed, subtask is considered complete
    return EvalResult(True, "Subtask completed successfully", 2/2)
```

More representative synthesized tasks are visualized in Fig1 of the anonymous link: https://anonymous.4open.science/r/OmniBench_rebuttal-A4C7/r1.md

Thank you again for your suggestion. We will integrate these examples into the next version.
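For readers unfamiliar with the DAG column above: each dictionary maps a subtask to its successors, and any valid execution of the composed task corresponds to a topological order of that graph. A minimal sketch for Task 1 using Python's standard `graphlib`:

```python
from graphlib import TopologicalSorter

# Successor map for Task 1 from the table above: an edge u -> v means
# subtask u must finish before subtask v can start.
dag = {"A": ["C"], "D": ["C"], "F": ["G"], "C": ["G"], "G": ["E"]}

# TopologicalSorter expects a predecessor mapping, so invert the edges.
preds = {}
for u, succs in dag.items():
    preds.setdefault(u, set())
    for v in succs:
        preds.setdefault(v, set()).add(u)

# One valid execution order of the six subtasks.
order = list(TopologicalSorter(preds).static_order())
print(order)
```

Here A, D, and F have no dependencies and may run first (or in parallel), while C waits for A and D, G waits for C and F, and E runs last.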
Ultra-Resolution Adaptation with Ease
Accept (poster)
Summary: This paper introduces **URAE**, a framework for efficiently adapting text-to-image diffusion models to ultra-high resolutions (e.g., 4K) while minimizing computational costs and data requirements. The approach is based on three key ideas:
- **Data Efficiency**: Using synthetic images generated by a teacher model improves convergence.
- **Parameter Efficiency**: Fine-tuning the minor singular components of weight matrices is more effective than traditional low-rank adaptation methods like LoRA.
- **Classifier-Free Guidance (CFG) Control**: Disabling CFG during adaptation (\( g = 1 \)) improves training consistency.

Experiments demonstrate state-of-the-art performance at 2K and 4K resolutions while requiring significantly fewer training samples and iterations.

Claims And Evidence:
- **Claim**: Ultra-resolution adaptation can be efficiently achieved without large-scale data or full model fine-tuning.
  - **Evidence**: Empirical results demonstrate that the method achieves competitive performance at 4K resolution with limited data and training iterations.
- **Claim**: Synthetic data can significantly accelerate convergence.
  - **Evidence**: Theoretical support via the bound:
\[
E[\|W_T - W^*\|_2^2] \leq E[\|(I - \eta M)^T\Delta_0\|_2^2] + \eta^2 \left(p(1-p)E[\delta^2]+(1-p)\sigma^2\right)\sum_{i=1}^{N}\frac{(1-(1-\eta\lambda_i)^T)^2}{\lambda_i}+p^2\|W_{ref}-W^*\|_2^2
\]
  - Experimental validation shows faster convergence with synthetic data augmentation.
- **Claim**: Tuning minor singular components of weights outperforms traditional LoRA-based fine-tuning.
  - **Evidence**: Ablation studies in Table 3 confirm that fine-tuning lower-rank singular values improves performance.
- **Claim**: Disabling classifier-free guidance during training is necessary.
  - **Evidence**: Table 2 shows significant degradation in performance when CFG is enabled during adaptation.
Methods And Evaluation Criteria: - **Data Generation**: Uses synthetic images from pre-trained models (e.g., FLUX-1.1) at lower resolutions for training guidance. - **Parameter-Efficient Fine-tuning**: The authors fine-tune the minor singular components of weight matrices instead of major components: \[ W = U\Sigma V, \quad W_{small} = U[:, -r:]\Sigma[-r:, -r:]V[-r:, :]. \] - **Classifier-Free Guidance Control**: CFG is disabled during training and only applied at inference: \[ \epsilon_{\theta}(z_t, t, \emptyset) + g \cdot (\epsilon_{\theta}(z_t, t, y)-\epsilon_{\theta}(z_t,t,\emptyset)) \] **Evaluation Metrics:** - **Quantitative**: FID, LPIPS, MAN-IQA, QualiCLIP, HPSv2.1, PickScore. - **Qualitative**: GPT-4o preference scores. Theoretical Claims: - **Strengths**: - The use of synthetic data to improve convergence is supported by a well-defined mathematical bound (Theorem 2.4). - The choice to tune minor singular components instead of major ones is motivated by SVD properties. - **Weaknesses**: - The theoretical motivation for minor singular component tuning is not rigorously justified beyond empirical observations. - No formal analysis of convergence or stability guarantees of the fine-tuning method. Experimental Designs Or Analyses: - **Strengths**: - Comprehensive ablation studies validate key design choices. - Performance comparisons with state-of-the-art methods support claims of efficiency. - **Weaknesses**: - Experiments are limited to **diffusion models**, leaving uncertainty about applicability to other generative models (GANs, autoregressive models). - The computational efficiency gains are not well-quantified in terms of inference latency and resource consumption. Supplementary Material: Yes, the supplementary material was reviewed. - Additional ablation studies and qualitative results were useful in supporting the claims. - However, some details regarding computational efficiency (e.g., training time comparisons, memory usage) were missing. 
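The classifier-free guidance combination quoted in the Methods summary above can be sketched numerically; the arrays here are random stand-ins for the two noise predictions, not outputs of an actual diffusion model:

```python
import numpy as np

rng = np.random.default_rng(1)
eps_uncond = rng.standard_normal((4, 4))  # stand-in for eps_theta(z_t, t, ∅)
eps_cond = rng.standard_normal((4, 4))    # stand-in for eps_theta(z_t, t, y)

def cfg(eps_uncond, eps_cond, g):
    # Classifier-free guidance: move from the unconditional prediction
    # toward the conditional one, scaled by the guidance factor g.
    return eps_uncond + g * (eps_cond - eps_uncond)

# g = 1 reduces the combination to the conditional prediction alone,
# i.e., guidance is effectively disabled (the training-time setting URAE uses).
print(np.allclose(cfg(eps_uncond, eps_cond, 1.0), eps_cond))
```

This also makes the ablation in Table 2 easy to interpret: during adaptation the model is trained with the g = 1 branch, while at inference g > 1 re-enables guidance.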
Relation To Broader Scientific Literature:
- The paper builds upon well-established literature in diffusion models and parameter-efficient adaptation, particularly works on LoRA and DreamBooth.
- Explicitly contrasts the proposed approach with previous fine-tuning strategies.
- Missing discussion on how this method compares with **latent-space adaptation** techniques used in recent diffusion models.

Essential References Not Discussed:
- The paper thoroughly covers diffusion model fine-tuning literature but could benefit from:
  - **Comparison with other LoRA modifications** (e.g., tuning minor components in other applications).
  - **Discussion on alternative ultra-resolution adaptation techniques**, such as patch-based super-resolution models.

Other Strengths And Weaknesses:
### Strengths:
- The approach is **practical and efficient**, providing clear guidelines for ultra-resolution adaptation.
- The **synthetic data augmentation strategy** is well-supported theoretically and experimentally.
- Extensive **benchmarking** against diffusion models provides strong empirical validation.

### Weaknesses:
- The paper lacks a formal explanation of why minor singular components are more effective than major ones. The results are compelling but require more theoretical justification.
- Computational overhead: While the method is designed to be efficient, there is **no detailed profiling of inference efficiency** (e.g., time per image, GPU memory consumption).

Other Comments Or Suggestions:
- Provide a **deeper** theoretical analysis on why minor singular component tuning is optimal for ultra-resolution tasks.
- Try to extend experiments to larger datasets (e.g., ImageNet, LAION-HR) to validate scalability.
- Analyze the **impact on inference time** to better quantify computational benefits.

Questions For Authors: This is good work, but I still have several general questions:
1.
**Theoretical Justification:** Can you formally analyze why tuning minor singular components leads to better adaptation performance in ultra-resolution tasks? 2. **Hyperparameter Sensitivity:** How sensitive is the approach to the choice of singular component rank (\( r \))? Would an adaptive selection method improve results? 3. **Applicability to Other Architectures:** Have you considered applying this method to other generative models beyond diffusion models, such as GANs or autoregressive transformers? 4. **Computational Costs:** Given that URAE is designed to be parameter-efficient, have you conducted inference speed and memory consumption benchmarks? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We deeply thank Reviewer FjKy for the valuable comments and are glad that the reviewer finds our method practical, efficient, and empirically strong. We would like to address the concerns as below.

> 1. Theoretical motivation for minor singular component tuning.

* We would like to supplement the following theoretical analysis towards tuning minor singular components, which will be included in our revision.
* Consider the low-rank adapter:
$$
Y=XW_0+XAB.
$$
The loss is denoted as $L$. Then, the gradients of $L$ w.r.t. $A$ and $B$ are:
$$
\frac{\partial L}{\partial A}=X^\top\frac{\partial L}{\partial Y}B^\top
$$
and
$$
\frac{\partial L}{\partial B}=A^\top X^\top\frac{\partial L}{\partial Y},
$$
respectively.
* In vanilla LoRA, $W_0$ is the original weight matrix $W$, while $A$ and $B$ are initialized as random values and zeros respectively. Thus, in the initial adaptation stage, due to the joint influence of $A$'s **random initialization and noise in data**, the gradients of $A$ and $B$ can be highly random, potentially leading to instability.
* In our approach of tuning minor components, as shown in Eqs. 5 and 6 of the main manuscript, if $W=U\Sigma V$ derived by SVD, $W_0=U[:,:-r]\Sigma[:-r,:-r]V[:-r,:]$, $A=U[:,-r:]\sqrt{\Sigma[-r:,-r:]}$, and $B=\sqrt{\Sigma[-r:,-r:]}V[-r:,:]$ initially. Consequently, the gradients of $A$ and $B$ are influenced by the minor components of the original weight matrix $W$, which tend to be **numerically small and more stable** compared to standard LoRA. For ultra-resolution adaptation, where major semantics and appearances remain unchanged, tuning minor components helps preserve knowledge in $W$ by **effectively regulating the gradients** of $A$ and $B$.

> 2. No formal analysis of convergence or stability guarantees of the fine-tuning method.
* We conduct the following theoretical analysis on the upper bound of the distance between the solution after $T$ iterations, $A_TB_T$, and the optimum $W^*$:
$$
\Vert A_TB_T-W^*\Vert^2_F\leq(1-\eta\mu)^T\Vert A_0B_0-U[:,:r]\Sigma[:r,:r]V[:r,:]\Vert^2_F+\sum_{i=r+1}^c\Sigma[i,i]^2,
$$
where $W^*=U\Sigma V$ with SVD, and $\mu$ is the smallest non-zero eigenvalue of the Hessian matrix.
* The above bound indicates that the training converges to its theoretical optimal solution at a linear rate. We will include the proof in the revision.

> 3. Applicability to other models.

* Thanks. We conduct experiments on Infinity, a text-to-image visual autoregressive model, to adapt it from the 1K to the 2K scale. The following results confirm the applicability to various models.

||QualiCLIP$\uparrow$|MAN-IQA$\uparrow$|HPSv2.1$\uparrow$|
|-|-|-|-|
|Infinity-8B|0.5233|0.3226|32.26|
|w/ URAE|**0.5570**|**0.3584**|**32.35**|

> 4. Quantified computational efficiency.

* In fact, the parameter efficiency here refers to training efficiency, as our method requires tuning only a small number of parameters. As indicated in Sec. 4, this work does not focus on inference efficiency since it is orthogonal to our main contributions. The inference cost is the same as the original FLUX operating at the corresponding resolutions. On H100:

||2K|4K|
|-|-|-|
|Inference Time (28 Steps/Image)|36.5 Sec.|330.4 Sec.|
|GPU Memory|27.5 GB|39.5 GB|

* For the quantified analysis of data efficiency and parameter efficiency, we kindly refer the reviewer to *our response to Q1 of Reviewer u5sm*.

> 5. Tuning minor components in other applications.

* We note that (Wang et al. 2024a) focuses on LLM fine-tuning and also applies minor-component tuning. However, it lacks a critical analysis of the applicability of this approach across various scenarios.
In contrast, we demonstrate that the method can improve performance when (1) the data contain significant noise and (2) the target distribution does not shift too much from the source, *e.g.*, 4K generation. In other cases, when clean data are available, we find that vanilla LoRA can be more effective, as shown in Tab. 2.
* Our response to Q1 provides theoretical insights on this. Due to small singular values, the gradients w.r.t. $A$ and $B$ are numerically small, which may lead to insufficient adaptation when training data are accurate.

> 6. Discussion on alternative ultra-resolution adaptation techniques.

* We show comparisons and integrations with some works in Tab. 1 and Fig. 5 and include some discussions in Sec. A.2.
* We kindly refer the reviewer to *our response to Q1 of Reviewer wYHL* for more discussions.

> 7. Extend experiments to larger datasets (e.g., ImageNet, LAION-HR).

* In fact, as shown in Line 263 (right), our 4K-generation model is already trained with LAION-HR data.

> 8. Sensitivity to the choice of singular component rank ($r$).

* We conduct the following studies to analyze the sensitivity to the rank $r$:

|Rank|1|4|16 (Default)|64|256|
|-|-|-|-|-|-|
|ImageReward$\uparrow$|0.9291|1.0150|1.0923|0.9442|0.9239|

Overall, the performance remains stable when $r$ is around 16.

---

Rebuttal Comment 1.1: Comment: Thank you for the thorough rebuttal and the additional theoretical and experimental clarifications on minor singular component tuning. The convergence analysis, Infinity model experiments, and expanded insights on computational efficiency and parameter usage all help solidify the practicality and applicability of URAE for ultra-resolution diffusion models. These details significantly strengthen the paper's overall contribution.

---

Reply to Comment 1.1.1: Comment: We would like to sincerely thank Reviewer FjKy for acknowledging our response and for the encouraging positive feedback.
Following the suggestions, we will include these results in our revision. We truly appreciate the reviewer's constructive input to our manuscript.
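To make the minor-component initialization discussed in the rebuttal above concrete, here is a plain-NumPy sketch; the matrix sizes and random weights are illustrative, not the actual FLUX parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 48))  # stand-in for a pre-trained weight matrix
r = 4                              # adapter rank

# Full SVD: W = U @ diag(S) @ Vt, singular values sorted in descending order.
U, S, Vt = np.linalg.svd(W, full_matrices=False)

# Frozen part W0: all but the r smallest singular directions.
W0 = U[:, :-r] @ np.diag(S[:-r]) @ Vt[:-r, :]

# Trainable adapter initialized from the r minor components, with the
# singular values split symmetrically: A = U_minor sqrt(S_minor),
# B = sqrt(S_minor) V_minor.
sqrtS = np.sqrt(S[-r:])
A = U[:, -r:] * sqrtS              # scales columns of U_minor
B = sqrtS[:, None] * Vt[-r:, :]    # scales rows of V_minor

# At initialization the frozen part plus the adapter reconstructs W exactly,
# so adaptation starts from the pre-trained model and only perturbs the
# minor singular directions.
print(np.allclose(W0 + A @ B, W))
```

Only A and B would be updated during fine-tuning, which matches the gradient argument in A1: their gradients are modulated by numerically small minor singular values.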
Summary: The paper "Ultra-Resolution Adaptation with Ease" presents a novel approach called URAE for adapting text-to-image diffusion models to generate ultra-high-resolution images (e.g., 4K) with limited training data and computational resources. The key contributions include: 1. Theoretical and empirical evidence showing that synthetic data from teacher models can significantly enhance training convergence. 2. A parameter-efficient fine-tuning strategy that tunes minor components of weight matrices, outperforming widely-used low-rank adapters when synthetic data is unavailable. 3. The importance of disabling classifier-free guidance during adaptation for models leveraging guidance distillation. 4. Extensive experiments demonstrating that URAE achieves performance comparable to state-of-the-art closed-source models like FLUX1.1 [Pro] Ultra with only 3K samples and 2K iterations for 2K generation, while setting new benchmarks for 4K-resolution generation. Claims And Evidence: The claims made in the submission are generally supported by clear and convincing evidence. The authors provide theoretical analysis (Theorem 2.4) to demonstrate the potential benefits of using synthetic data for training convergence. They also conduct extensive experiments to validate the effectiveness of their proposed methods, including ablation studies on key components (data source, parameter tuning strategy, classifier-free guidance). The results show significant improvements over baseline methods and state-of-the-art models in both quantitative metrics and qualitative visual comparisons. The claims about the effectiveness of tuning minor components when synthetic data is unavailable are well-supported by experimental results in the 4K generation task. Methods And Evaluation Criteria: The proposed methods make sense for the problem of ultra-resolution adaptation. 
The approach of using synthetic data from teacher models addresses the challenge of limited high-quality training data for ultra-resolution images. The parameter-efficient fine-tuning strategy that focuses on minor components of weight matrices is innovative and appropriate for scenarios where synthetic data is unavailable. The evaluation criteria, including FID, LPIPS, MAN-IQA, QualiCLIP, HPSv2.1, and PickScore, are standard and relevant for assessing image generation quality. The use of GPT-4o for AI preference studies adds a novel dimension to the evaluation, providing insights into human-like preferences for generated images. Theoretical Claims: The theoretical claims are correct. The authors provide a detailed proof (Theorem B.1) for their main theoretical result regarding the error bound when training with a mixture of real and synthetic data. The proof follows standard optimization analysis for neural networks and correctly accounts for the impact of label noise and model discrepancies. The assumptions made (infinite-width neural networks, linear approximation) are standard in theoretical analyses of neural network training. Experimental Designs Or Analyses: The experimental designs are sound and valid. The authors conduct experiments on both 2K and 4K resolution tasks, comparing against multiple baseline methods and state-of-the-art models. The ablation studies effectively isolate the impact of different components of their approach. The user study for 4K generation provides additional validation of the practical effectiveness of their method. The experimental setup, including training details and implementation specifics, is well-documented and allows for reproducibility. Supplementary Material: I reviewed the supplementary material, including the theoretical proof in Appendix B and additional experimental details in Appendix C and D. The theoretical proof is thorough and correctly supports the main claims. 
The additional experimental results provide further validation of the method's effectiveness across different evaluation dimensions and qualitative examples. Relation To Broader Scientific Literature: The key contributions of this paper are well-situated within the broader scientific literature on text-to-image generation and diffusion models. The work builds upon recent advances in diffusion models, parameter-efficient fine-tuning, and high-resolution image generation. It addresses the practical challenge of adapting existing models to ultra-resolution settings with limited resources, which is a significant concern in the field. Essential References Not Discussed: The paper cites relevant prior work in text-to-image diffusion models, high-resolution generation, and parameter-efficient fine-tuning. However, it could benefit from discussing more recent works on high-resolution generation[1,2,3,4], especially training-free ones. [1] Jin, Zhiyu, et al. "Training-free diffusion model adaptation for variable-sized text-to-image synthesis." Advances in Neural Information Processing Systems 36 (2023): 70847-70860. [2]Cao, Boyuan, et al. "Ap-ldm: Attentive and progressive latent diffusion model for training-free high-resolution image generation." arXiv preprint arXiv:2410.06055 (2024). [3] Qiu, Haonan, et al. "Freescale: Unleashing the resolution of diffusion models via tuning-free scale fusion." arXiv preprint arXiv:2412.09626 (2024). [4] Kim, Younghyun, et al. "Diffusehigh: Training-free progressive high-resolution image synthesis through structure guidance." arXiv preprint arXiv:2406.18459 (2024). Other Strengths And Weaknesses: Strengths: • The paper addresses a significant practical problem in the field of text-to-image generation. • The proposed URAE framework is comprehensive, addressing both data and parameter efficiency. • The theoretical analysis provides valuable insights into the effectiveness of synthetic data. 
• The experimental validation is extensive and rigorous. Weaknesses: • The paper could benefit from more detailed comparisons with very recent works on efficient high-resolution generation. For example, those methods mentioned in the “Essential References Not Discussed” section • The computational efficiency during inference is not specifically optimized, which could be a limitation for real-time applications. Other Comments Or Suggestions: The paper is well-written and well-structured, making it accessible to both experts and those new to the field. The visualizations of results are clear and effectively demonstrate the quality improvements achieved by URAE. Questions For Authors: 1. How would the performance of URAE scale with additional training data beyond the 3K samples used in the experiments? 2. What specific architectural modifications would be needed to combine URAE with recent efficient diffusion backbone designs (linear attention, SSM)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank Reviewer wYHL for the positive feedback on the manuscript and are very excited that the reviewer mentions the strengths of addressing a significantly practical problem with a comprehensive framework, insightful theoretical analysis, extensive experiments, and a well-written manuscript. The questions are addressed below.

> 1. The paper could benefit from more detailed comparisons with very recent works on efficient high-resolution generation. For example, those methods mentioned in the “Essential References Not Discussed” section.

* Thanks for bringing these related works to our attention. We show comparisons and integrations with some training-free works in Tab. 1 and Fig. 5. For works mentioned by the reviewer, we would like to supplement the comparison results using consistent COCO validation prompts here:

||FID$\downarrow$|HPSv2.1$\uparrow$|ImageReward$\uparrow$|PickScore$\uparrow$|
|--|--|--|--|--|
|AP-LDM[2]|48.50|30.40|0.6874|22.80|
|FreeScale[3]|48.87|31.19|0.7494|22.66|
|DiffuseHigh[4]|49.02|30.16|0.6182|22.77|
|URAE(Ours)|**38.85**|**31.50**|**1.0923**|**23.21**|

* We would like to include the following discussions of these references in the revision:
  1. [1] proposes a resolution-adaptive attention scale factor, which has **already been adopted** in a series of works including FLUX-1.dev$^*$ and I-Max in Tab. 1.
  2. [2] proposes an attention-guidance scheme and a progressive upsampling strategy.
  3. [3] adopts a global-local self-attention mechanism and a tailored self-cascade upscaling strategy with region-aware detail control.
  4. [4] proposes a DWT-based structural guidance to guide the high-resolution generation with the structural information of the low-resolution images.
These works mentioned by the reviewer tackle the problem of high-resolution image generation from **training-free** perspectives by designing effective strategies, *e.g.*, **progressive generation**, to leverage pre-trained diffusion models at their native scales, whereas our method focuses on adapting these models from a **training-based** perspective so that they can **directly** operate at a high-resolution scale. Therefore, as mentioned in Sec. A.2 of the appendix, the two lines of research address the problem from orthogonal directions, *i.e.*, strategy v.s. model, and can be readily integrated together for better performance, as shown in Tab. 1 and Fig. 5.

> 2. The computational efficiency during inference is not specifically optimized, which could be a limitation for real-time applications.

* Thanks for pointing this out. Although this work does not specifically optimize inference latency, we would like to share our latest observation that, even without any additional training, a trained adapter on FLUX.1-dev can be migrated onto FLUX.1-schnell, which can generate high-quality results with only 4 denoising steps and achieves $6\times$ acceleration compared with FLUX.1-dev (25.8 v.s. 36.5 sec./image). The performance under this setting is shown below:

||FID$\downarrow$|HPSv2.1$\uparrow$|ImageReward$\uparrow$|PickScore$\uparrow$|
|--|--|--|--|--|
|FLUX-schnell|42.42|27.97|0.6902|22.07|
|FLUX-schnell*|42.20|28.17|0.7446|22.38|
|w/ URAE|**38.66**|**29.63**|**0.9999**|**22.74**|

We will include these results in our revision, which suggest significant potential for acceleration.

> 3. How would the performance of URAE scale with additional training data beyond the 3K samples used in the experiments?

* Thanks for the insightful question. We are actively collecting more data from FLUX1.1 [Pro] Ultra and training new models.
As scaling up data collection, preprocessing, and training requires significant resources, the experiments are still ongoing, and we will include the results in our revision.

> 4. What specific architectural modifications would be needed to combine URAE with recent efficient diffusion backbone designs (linear attention, SSM)?

* Thanks for the valuable question. We are continuing to work on improving the architectural efficiency of the proposed URAE. Our latest exploration suggests the feasibility of replacing the original full attention with the **linearized attention** structure introduced in (Liu et al., 2024a). We find that even without further adaptation, the trained adapters in URAE are compatible with these novel attention layers. We present some examples via [this anonymous link](https://anonymous2024.s3.ap-southeast-1.amazonaws.com/data/linear.pdf). The models with linearized attention achieve $1.4\times$ acceleration at 2K resolution (25.8 v.s. 36.5 sec./image) and $2.7\times$ acceleration at 4K resolution (124.2 v.s. 330.4 sec./image).

We would like to thank Reviewer wYHL again for the in-depth reviews. We would be glad to further interact with the reviewer if there are any further questions.
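As background on the linearized attention mentioned in A4 above, kernel-based linear attention exploits associativity to avoid the quadratic attention matrix. A generic sketch of this common formulation (not necessarily the exact structure of Liu et al., 2024a):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4  # toy sequence length and head dimension
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))

def phi(x):
    # Positive feature map elu(x) + 1, a common choice in linear attention.
    return np.where(x > 0, x + 1.0, np.exp(x))

# Quadratic form: build the n x n matrix phi(Q) phi(K)^T, then apply to V.
A = phi(Q) @ phi(K).T
out_quad = (A @ V) / A.sum(axis=1, keepdims=True)

# Linear form: regroup as phi(Q) (phi(K)^T V), O(n d^2) instead of O(n^2 d).
kv = phi(K).T @ V                 # d x d summary of keys and values
z = phi(Q) @ phi(K).sum(axis=0)   # per-query normalizer, length n
out_lin = (phi(Q) @ kv) / z[:, None]

# Both orderings compute the same output by associativity.
print(np.allclose(out_quad, out_lin))
```

The reassociation is what yields the reported speedups at high resolution, where the token count n grows quadratically with image side length.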
Summary: This paper tackles the challenge of efficiently adapting text-to-image diffusion models to ultra-high resolutions (2K and 4K). Traditional approaches demand massive amounts of 4K training data and expensive fine-tuning of the entire model, making them difficult to deploy at scale. In contrast, URAE explores two main dimensions—data efficiency and parameter efficiency—and provides guidelines that yield strong ultra-resolution results with only thousands of samples and minimal GPU resources. By combining synthetic teacher-generated data (when available) and targeted parameter-efficient fine-tuning, URAE achieves state-of-the-art 2K image quality comparable to closed-source models such as FLUX1.1. It also sets new benchmarks in 4K resolution, demonstrating its adaptability under data-scarce conditions. Claims And Evidence: 1. **Claim: URAE Achieves Ultra-Resolution Adaptation with Minimal Data** - **Evidence**: The authors fine-tune a base diffusion model (FLUX.1-dev) on just **3K synthetic samples for 2K** tasks and achieve close or better results than advanced closed-source models. The theoretical analysis (Theorem 2.4) shows how synthetic data from a high-quality teacher can expedite training convergence. 2. **Claim: Parameter-Efficient Fine-Tuning Is More Effective Than Full Model Tuning** - **Evidence**: Through ablation, they show that focusing on particular “minor” or “major” singular values outperforms commonly used LoRA in certain scenarios, especially for 4K adaptation when synthetic data is unavailable. Empirical benchmarks in Tables 1–3 confirm superior performance over baseline or naive approaches. 3. **Claim: Disabling Classifier-Free Guidance (CFG) During Training Improves Stability** - **Evidence**: The authors discover that for guidance-distilled models like FLUX, setting the CFG scale to 1 (effectively “off”) during fine-tuning leads to better adaptation performance. 
Results in Table 2 and Figures 3 & 7 illustrate the negative impact of leaving CFG on during adaptation. 4. **Claim: Compatibility with Training-Free High-Resolution Pipelines** - **Evidence**: URAE can be employed in conjunction with existing post-processing or upscale pipelines (e.g., SDEdit, I-Max). Figure 5 shows that URAE effectively upgrades their output from 1024×1024 to 2048×2048, surpassing conventional super-resolution baselines like Real-ESRGAN and SinSR. Methods And Evaluation Criteria: - **Methods**: - URAE advocates fine-tuning on high-quality synthetic data generated by a teacher model. - At 2K resolution (with synthetic data), focusing on major components (LoRA) works well. At 4K resolution (less reliable data), tuning minor singular values preserves the model’s essential capacities and avoids overfitting to noise. - For models that rely on guidance distillation, turning off CFG (g=1) eliminates mismatched training objectives. - **Evaluation**: - Datasets & Benchmarks: HPD, DPG, LAION-5B for real data, and teacher-synthesized data from FLUX1.1 [Pro] Ultra for synthetic data. - Metrics: FID, LPIPS, MAN-IQA, QualiCLIP, user preference metrics (HPSv2.1, PickScore), and GPT-4-based AI preference scores. - Baselines: Includes stable or reference models like PixArt-Sigma, Sana-1.6B, Real-ESRGAN, SinSR, FLUX-1.dev, etc. Overall, comprehensive quantitative and qualitative comparisons highlight URAE’s effectiveness. Theoretical Claims: The authors introduce a linearized neural tangent kernel perspective (Theorem 2.4) to show that mixing real and synthetic data can accelerate learning, provided the synthetic data come from a sufficiently good teacher. This analysis solidifies the data-efficiency claim, as it mathematically bounds the distance to the optimal solution under varying real/synthetic data proportions. Experimental Designs Or Analyses: - **Extensive Benchmarks**: 1. **2K Results**: Detailed in Table 1 and Fig. 
4–6, showing URAE outperforms baseline or SOTA methods in image fidelity and preference tests. 2. **4K Results**: Evaluated in Table 3 and Fig. 8, highlighting that “minor” component tuning without synthetic data can still yield strong high-resolution outputs. 3. **Ablation Studies**: Table 2 and Fig. 7 analyze the effect of (i) synthetic vs. real data, (ii) tuning major vs. minor components, and (iii) CFG on or off. - **User Studies & AI-Assisted Scoring**: Incorporating GPT-4 preference evaluation (Fig. 4, Table 4) yields additional insights into alignment, aesthetics, and overall image quality. Supplementary Material: - **Appendices** offer: 1. Detailed theoretical proofs (Appendix B) for Theorem 2.4. 2. Additional ablations, hyperparameter details, and user study prompts (Appendix C–D). 3. More visual examples of high-resolution outputs, reinforcing URAE’s texture fidelity advantages. The authors state that the code will be released publicly in the future. Relation To Broader Scientific Literature: - URAE aligns with growing research on large diffusion transformers, e.g., FLUX, PixArt, SANA, focusing primarily on data efficiency rather than training entire massive backbones. - The approach extends beyond standard LoRA, referencing PISSA, FedPara, and other recent minor-component methods. - Although mainly tested on image generation, the method could integrate well with broader multimodal large language models. Essential References Not Discussed: Yang, Zhuoyi, et al. "Inf-dit: Upsampling any-resolution image with memory-efficient diffusion transformer." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2024. Other Strengths And Weaknesses: **Strengths**: - **Practical Data Efficiency**: Demonstrates that 3K–30K images are enough to scale from 2K to 4K resolution, far below prior 4K training demands. - **Detailed Ablation**: Comprehensive analysis of synthetic vs. real data usage, plus major/minor SVD component choices. 
- **Strong Empirical Evidence**: Includes GPT-4 preference ranking, user studies, and well-known objective metrics. **Weaknesses**: - **Limited Real-World Cost Analysis**: While fewer iterations are praised (2K–10K), a clearer breakdown of training time, memory usage, or energy consumption would better illustrate URAE’s resource savings. - **Focus on DiT-Style Models**: The method’s adaptability to other architectures (e.g., UNet-based) is suggested but not deeply tested. - **Inference Efficiency**: The paper admits it does not optimize for inference latency, which might matter for large-scale industrial use. Other Comments Or Suggestions: In Figure 7, it’s not immediately clear how using synthetic data differs from using real data. Could you explain why the visual differences in Figure 7 appear subtle, and what evidence in the paper supports the conclusion that synthetic data ultimately improves training and performance? Questions For Authors: 1. **Computational Footprint**: Could you provide more details on training cost comparisons, e.g., GPU hours, memory usage, or speedups over full fine-tuning? 2. **UNet-Based Models**: Does URAE also apply neatly to stable diffusion–type backbones, or are modifications needed? Code Of Conduct: Affirmed. Overall Recommendation: 4
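The "major vs. minor singular value" tuning contrasted in the review above can be made concrete with a small sketch: split a weight matrix by SVD into a top-$r$ ("major") part and a residual ("minor") part, then update only one of the two. This is a conceptual illustration with assumed details (plain numpy, an arbitrary rank split, a hypothetical gradient step), not the paper's actual implementation.

```python
import numpy as np

def split_by_svd(W, r):
    """Split weight W into a 'major' part (top-r singular values) and a
    'minor' part (the remaining ones), with W = W_major + W_minor."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    W_major = (U[:, :r] * s[:r]) @ Vt[:r]
    W_minor = (U[:, r:] * s[r:]) @ Vt[r:]
    return W_major, W_minor

def minor_update(W, r, grad, lr=1e-2):
    """Conceptual fine-tuning step: freeze the major part, apply a
    (hypothetical) gradient step only to the minor part."""
    W_major, W_minor = split_by_svd(W, r)
    return W_major + (W_minor - lr * grad)
```

By construction `W_major + W_minor` reconstructs `W` exactly, so choosing which part receives gradient updates decides whether the dominant pretrained directions are preserved (minor-component tuning) or adapted (major-component tuning).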
Rebuttal 1: Rebuttal: We sincerely appreciate Reviewer u5sm for the constructive comments. We are happy that the reviewer finds our data efficiency practical, ablation detailed, and empirical evidence strong. We would like to address the concerns and questions reflected in the review below. > 1. Limited Real-World Cost Analysis: While fewer iterations are praised (2K–10K), a clearer breakdown of training time, memory usage, or energy consumption would better illustrate URAE’s resource savings. * We would like to sincerely thank the reviewer for the constructive suggestions. Following the suggestions, according to publicly available information, we summarize "# of Training Iteration × Batch Size" of various methods, which reflects the total number of seen samples during training, to present a clearer breakdown of the required resources: ||PixArt-Sigma-XL|Sana-1.6B|Ours| |-|-|-|-| |# of Training Iteration × Batch Size|64K|≥320K|16K| Since different methods use varying base models and hardware for training, we exclude training time as a direct indicator of resource savings. Nevertheless, even for FLUX—the largest open-source diffusion model with 12B parameters—our 4K model can still be trained within a day on an 8×H100 server. * For memory usage, we conduct the following studies on training-time GPU memory requirement (MB) with respect to various ranks of the adapters: |Rank|1|4|16 (Default)|64|256|1536|3072 (Full)| |-|-|-|-|-|-|-|-| |2K|35916|35958|36124|36816|39884|52102|77880| |4K|62806|62850|63010|63704|66114|80332|OOM| We observe that compared with full-rank adaptation, the low-rank adapters save GPU memory by 50%+. > 2. Focus on DiT-Style Models: The method’s adaptability to other architectures (e.g., UNet-based) is suggested but not deeply tested. * Thanks for the constructive suggestion. Following the suggestion, we conduct an experiment on SD-1.5 to adapt it from 512 to 1024 resolution. The synthetic data used are 10K samples generated by SD3. 
Results are shown below: ||FID$\downarrow$|HPSv2.1$\uparrow$|PickScore$\uparrow$| |--|--|--|--| |SD 1.5|47.55|23.66|20.69| |SD 1.5*|45.15|23.72|20.71| |SD 1.5 w/ [a]|43.07|24.36|21.32| |SD 1.5 w/ Ours|**31.06**|**28.93**|**21.98**| The FID is computed against 5K images in COCO2014val following [b]. SD1.5* denotes using the proportional attention strategy similar to FLUX-1.dev* in Tab. 1. [a] is a state-of-the-art training-free high-resolution generation baseline based on resolution-aware downsampling and upsampling. The results verify the adaptability of our method to UNet-based diffusion models and its superior high-resolution generation capacity. > 3. Inference Efficiency: The paper admits it does not optimize for inference latency, which might matter for large-scale industrial use. * Thanks for pointing this out. Although this work does not specifically optimize inference latency, we would like to share our latest observation that, without any additional training, a trained adapter on FLUX.1-dev can be migrated onto FLUX.1-schnell, which can generate high-quality results with only 4 denoising steps and achieves $6\times$ acceleration compared with FLUX.1-dev (25.8 vs. 36.5 sec./image). The performance under this setting is shown below: ||FID$\downarrow$|HPSv2.1$\uparrow$|ImageReward$\uparrow$|PickScore$\uparrow$| |--|--|--|--|--| |FLUX-schnell|42.42|27.97|0.6902|22.07| |FLUX-schnell*|42.20|28.17|0.7446|22.38| |w/ URAE|**38.66**|**29.63**|**0.9999**|**22.74**| We will include these results in our revision, which suggest significant potential for acceleration. > 4. In Figure 7, it’s not immediately clear how using synthetic data differs from using real data. Could you explain why the visual differences in Figure 7 appear subtle, and what evidence in the paper supports the conclusion that synthetic data ultimately improves training and performance? * Thanks for the good question. In fact, Fig. 
7 in the manuscript empirically verifies the theoretical result in Theorem 2.4 that synthetic data improve performance by diminishing label noise. Comparing results from synthetic and real data, we observe that the latter introduces many unrelated petals, whereas the former exhibits a cleaner layout. Additionally, the synthetic data produce a brighter, more vivid color tone and sharper contours with higher saturation. * Furthermore, the results in Tab. 2 quantitatively demonstrate the superiority of synthetic data. We would like to thank Reviewer u5sm again for the valuable feedback. We hope our responses alleviate the reviewer's concerns, and we are happy to answer any additional questions. *** [a] HiDiffusion: Unlocking Higher-Resolution Creativity and Efficiency in Pretrained Diffusion Models, Zhang et al., ECCV 2024 [b] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models, Li et al., CVPR 2024 --- Rebuttal Comment 1.1: Comment: Thank you for the comprehensive answers. My concerns are fully addressed. --- Reply to Comment 1.1.1: Comment: We are more than glad to know that our responses have fully resolved the raised concerns. We deeply value the reviewer’s insightful comments and constructive suggestions, which will be reflected in our revision and have significantly contributed to refining our manuscript. We are truly grateful for Reviewer u5sm’s time, effort, and thoughtful engagement throughout this process.
Summary: This paper explores the adaptation of existing models to ultra-resolution image generation. The authors categorize the challenges into two key aspects: data efficiency and parameter efficiency. Regarding data efficiency, the authors argue that synthetic data can serve as a valuable resource for model convergence in data-scarce scenarios. Regarding parameter efficiency, the proposed approach focuses on tuning minor components when adapting existing models to ultra-resolution. This method offers a promising direction for expanding existing models to ultra-resolution image generation by leveraging a small set of synthetic data for efficient adaptation. Claims And Evidence: It is somewhat unclear whether synthetic data generated by teacher models can theoretically promote training convergence significantly, as the authors provide only empirical evidence without a formal theoretical justification. In Section 2.2, synthetic data would be beneficial only if the reference model generating these data is highly accurate. However, it is not guaranteed that this approach avoids mode collapse. From visual inspection, the generated images appear to exhibit **highly similar patterns**, suggesting possible mode collapse. For instance: In Figure 8 (URAE Minor-4K), the second image contains many repetitive flower patterns, whereas PixArt-Sigma-XL generates more diverse floral structures. Similarly, in the giraffe example, the URAE-generated image displays repetitive mountain patterns, whereas PixArt-Sigma-XL and Sana-1.6B show greater variation. Such **repetitive patterns** are also widely noticeable in Figure 1, further supporting this concern. Additionally, it is unclear how closely FLUX-1.1 [Pro] Ultra resembles real data. The authors appear to assume FLUX-1.1 [Pro] Ultra as real and measure FID scores relative to it in Table 1, yet it is still synthetically generated data. 
While the paper argues that no established 2K or 4K resolution benchmarks exist, an alternative approach could be to **reduce the resolution for quantitative evaluation**. For example, adapting a 512-resolution model to generate 1024-resolution images could provide meaningful comparative insights. Methods And Evaluation Criteria: See the **Claims And Evidence** part. Theoretical Claims: It is commendable that the paper includes a theoretical proof in Section 2; however, the derivation does not clearly support the results claimed in the paper. Please refer to the "Claims and Evidence" section for further clarification. Experimental Designs Or Analyses: It appears that MAN-IQA and QualiCLIP may not be reliable metrics for evaluating 4K resolution, as their rankings differ significantly from user study results. For instance, while FLUX-1.dev* ranks second in MAN-IQA, it exhibits noticeable artifacts in Figure 8, raising concerns about the alignment between automated metrics and perceptual quality. Additionally, could the authors clarify how LPIPS is measured? Specifically, which images are used as the source and which as the target in the comparison? Supplementary Material: I have reviewed the supplementary material. 
- The writing is well-structured and easy to follow, making the paper accessible to readers. - Exploring this problem from multiple perspectives (e.g., data, parameters) provides valuable insights and contributes to a broader understanding of the challenges involved. **Weaknesses**: Please address the concerns raised in the "Claims and Evidence" and "Experimental Designs or Analyses" sections. Other Comments Or Suggestions: Given the existence of numerous tuning-free approaches for expanding models to high-resolution image synthesis, including ultra-resolution adaptation, it is difficult to claim that this paper is the first to tackle adaptation as a primary contribution. Questions For Authors: How can this approach be integrated with existing tuning-free high-dimensional image generation methods? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate Reviewer 5Gfa's thoughtful comments and are glad that the significance and insights of our work are recognized. We would like to address the concerns as below. > 1. Theoretical analysis on synthetic data and mode collapse. * Theorem 2.4 illustrates that, **by diminishing label noise, accurate synthetic data achieve lower error than real data**, which theoretically supports the effectiveness. * **For mode collapse, we theoretically verify that the difference in the diversity of generated samples between the trained and optimal models is tightly bounded, highlighting its robustness against this issue.** Specifically, assume the input data $u\sim\mathcal{N}(0;I)$. The distance between the variance of generated samples by models after $T$ iterations and the optimal one satisfies: $$ \mathbb{E}[\vert Var(f(u;W_T))-Var(f(u;W^*))\vert]\leq2\Vert W^*\Vert_2\sqrt{d}+d, $$ where the settings follow Theorem 2.4 and $d$ is the r.h.s. of Eq. 3, concerning the accuracy of synthetic data. We will include the proof in the revision. > 2. Visual inspection and mode collapse. * Mode collapse refers to a lack of diversity across **various generated samples**, which is, in fact, not equivalent to similar patterns **within an image**. According to its definition, we do not encounter this issue as validated by the FID against 2K real images in COCO2014val below: |2K|FID$\downarrow$|LPIPS$\downarrow$|4K|FID$\downarrow$|LPIPS$\downarrow$| |-|-|-|-|-|-| |PixArt-Sigma-XL|57.02|0.5075|PixArt-Sigma-XL|75.81|0.5066| |Sana-1.6B|54.57|0.5122|Sana-1.6B|73.46|0.5108| |Ours |**52.95**|**0.4669**|Ours|**70.44**|**0.4647**| |FLUX1.1 [Pro] Ultra|47.12|0.4518|FLUX1.1[Pro]Ultra|-|-| * Possibly caused by the powerful spatial attention, FLUX itself tends to yield similar patterns, which are also reflected in Fig. 1 and Fig. 14 of I-Max (Du et al., 2024b) and can be inherited by models based on it. 
* Empirically, we observe that when objects or patterns are explicitly specified in prompts, the results tend to follow the prompts rather than exhibiting similarity. We validate this through GenEval scores below, which assess the precision of position, instance appearance, etc. ||PixArt-Sigma-XL|Sana-1.6B|Ours| |-|-|-|-| |GenEval Score|0.5422|0.6892|**0.6913**| * We sincerely hope our responses can alleviate this concern, and we will further clarify it with visualizations in our revision. > 3. It's unclear how closely FLUX-1.1 [Pro] Ultra resembles real data. * As FLUX1.1 [Pro] Ultra ranks top on multiple text-to-image leaderboards and our goal is to achieve on-par performance with it, we adopt its generated images as targets in Tab. 1. * We also supplement results computed against real images in COCO2014val. Please refer to *our response to Q2* for details. > 4. Reduce the resolution for quantitative evaluation. * Thanks for the suggestion. We evaluate our URAE on SD1.5 and adapt it from 512 to 1024 resolution. The training data are generated by SD3. We kindly refer the reviewer to *our response to Q2 of Reviewer u5sm* for the results, which demonstrate that URAE achieves superior high-resolution generation capacity. > 5. MAN-IQA and QualiCLIP may not be reliable metrics for evaluating 4K resolution. * In fact, various metrics have varying preferences and biases, so we include diverse metrics to demonstrate the superiority of our method across various aspects. By downsampling the generated 4K images to the required resolution for evaluation, we supplement more metrics here to reinforce the conclusion: |4K|HPSv2.1|ImageReward|PickScore|GPT-4o Aesthetic|GPT-4o Prompt Alignment|GPT-4o Overall| |-|-|-|-|-|-|-| |PixArt-Sigma-XL|31.02|0.9342|22.76|87.66|87.00|86.28| |Sana-1.6B|32.00|1.0886|22.86|87.71|89.94|86.83| |Ours|**32.85**|**1.1484**|**23.38**|**89.65**|**90.50**|**87.58**| * GPT-4o scores for human-like evaluation are also included. 
|Win Rate|Aesthetic|Prompt Alignment|Overall| |-|-|-|-| |vs. PixArt|67.90%|61.60%|59.50%| |vs. Sana|66.60%|50.40%|57.30%| > 6. How LPIPS is measured? * In Tab. 1, similar to FID, images generated by FLUX1.1 [Pro] Ultra are used as the target for LPIPS, while the sources are images generated by various methods, following [a]. In *our response to Q2*, we also supplement the LPIPS results computed against real images. > 7. Relationship with tuning-free high-dimensional image generation methods. * The "ultra-resolution adaptation" in the manuscript refers to the training-based adaptation. We will explicitly clarify this in our revision. * The relationships are discussed in Sec. A.2 of the appendix, where we mention that the two lines of research tackle the problem from two orthogonal perspectives: model and pipeline. * As shown in Line 308 (left), we apply the adapters trained by our method to these training-free solutions in the high-resolution stage. Results can be found in Tab. 1 and Fig. 5. *** [a] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models, Li et al., CVPR 2024 --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed answers. The additional experiments and evaluations address my concerns, and I am happy to raise my scores. --- Reply to Comment 1.1.1: Comment: We are truly grateful for the reviewer's thoughtful and constructive feedback, which has been instrumental in improving our work. We are more than encouraged to hear that the reviewer's concerns have been addressed. Thanks again for the reviewer's time and valuable input throughout the review process :)
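For reference on the FID numbers discussed throughout this thread: FID is the Fréchet distance between two Gaussians fitted to feature embeddings (typically Inception features) of the compared image sets. A minimal sketch, assuming the feature matrices have already been extracted (the feature extractor itself is omitted):

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b):
    """FID between two sets of already-extracted feature vectors:
    ||mu_a - mu_b||^2 + Tr(Ca + Cb - 2 (Ca Cb)^{1/2})."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    ca = np.cov(feats_a, rowvar=False)
    cb = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(ca @ cb)
    if np.iscomplexobj(covmean):  # drop tiny numerical imaginary parts
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(ca + cb - 2.0 * covmean))
```

This also clarifies why the choice of reference set (FLUX1.1 [Pro] Ultra outputs in Tab. 1 vs. real COCO2014val images in the rebuttal tables) changes the reported numbers: the distance is always relative to the fitted statistics of the chosen target set.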
Efficient Skill Discovery via Regret-Aware Optimization
Accept (poster)
Summary: This paper proposes regret-aware skill discovery (RSD) for unsupervised skill discovery. RSD is built upon METRA, a previous temporal distance-based skill discovery method. The key idea behind RSD is to use a separate, learned skill sampler policy to sample $z$'s for better exploration (unlike the uniform distribution in METRA). Specifically, they train two policies in an adversarial manner: an action policy $\pi_{\theta_1}$ and a skill sampler policy $\pi_{\theta_2}$. $\pi_{\theta_1}$ maximizes an objective similar to METRA (but with slight modifications to make it compatible with bounded $\phi$). $\pi_{\theta_2}$ maximizes the regret (defined as $V_k - V_{k-1}$) of the action policy to guide the agent into a region where there's room for improvement in skill learning. The authors experimentally show that RSD leads to better state coverage and downstream performance in Ant, Maze2d, and AntMaze domains. Claims And Evidence: Their claims made in the abstract and introduction are generally well-supported by empirical evidence. Methods And Evaluation Criteria: Their approach is sensible and the evaluation criteria look reasonable to me. Theoretical Claims: N/A Experimental Designs Or Analyses: They use standard tasks in unsupervised RL (Ant, PointMaze, and AntMaze), which look reasonable to me. Supplementary Material: I confirmed that the authors provided the code in the supplementary material. Relation To Broader Scientific Literature: This paper tackles unsupervised skill discovery, and the main takeaway (to me) is that a non-uniform skill sampling distribution can lead to better exploration. This is often overlooked in the previous literature, as prior works mostly employ a symmetric, uniform distribution (Gaussian, uniform over a box, etc.). I believe this is a nice insight to the community. 
Essential References Not Discussed: One particularly related missing work is "TLDR: Unsupervised Goal-Conditioned RL via Temporal Distance-Aware Representations" by Bae et al. (2024). Their method is also based on METRA and is evaluated mainly on AntMaze. While I don't believe the comparison with this method is necessary in assessing this paper, it'd have been better if the authors had (at least) discussed this work in the related work section. Other Strengths And Weaknesses: ### Strengths * The concept of using a learned skill sampling distribution seems (relatively) novel to me (at least it has not been well-studied in the literature). The use of regrets seems quite sensible to me as well. * Figure 5 is particularly convincing to me. The authors convincingly demonstrate that RSD leads to better diversity and state coverage than METRA by having a non-uniform skill sampler. * The paper provides diverse analyses studying different aspects of RSD training. ### Weaknesses * In my opinion, the main weakness of this paper is writing. * Section 3 does not flow very well. It directly starts with the detailed description of $\pi_{\theta_1}$ and $\pi_{\theta_2}$ and presents the full algorithm box without motivating or defining the notations. In particular, the "types" of the policies are never defined -- I later realized that $\pi_{\theta_1}$ outputs actions and $\pi_{\theta_2}$ outputs $z$s, which confused me a bit. * Equation (8) is also quite confusing. It is unclear how $\theta_2$ affects the objective at this point -- it turns out only in Section 3.3 that $P_z$ depends on $\pi_{\theta_2}$. Also, why does $\pi_{\theta_1}$ minimize this objective? Isn't it also supposed to maximize $V_{\pi_{\theta_1}}^k$ (treating $V_{\pi_{\theta_1}}^{k-1}$ as a constant)? * I believe Section 3.2 and Section 3.3 should come before Section 3.1 (or at least the paper should be heavily restructured in general) as $Q$ and $V$ depend on the reward function defined in Section 3.2. 
* Equation (11) needs a better explanation. While I understood this, it might be unclear to the general reader why "a well-learned skill trajectory in the bounded space must hold" the equation below. * L193: "Due to the bounded nature of $\phi( \cdot )$, it naturally satisfies the Lipschitz constraint.": Boundedness and Lipschitzness are separate concepts. For example, the Heaviside step function is bounded but not Lipschitz (and not even continuous). * Regarding experiments, the domains are limited to PointMass and Ant. In its current form, it is unclear how RSD scales to pixel-based observations or other types of environments (e.g., whole-body humanoid control, robotic manipulation, etc.). Other Comments Or Suggestions: I spotted quite a lot of grammatical errors and inconsistencies throughout the paper. I'd recommend running a spell/grammar checker. To list only a few: * L145: "for agent policy" -> "for the agent policy" * L148: "These improvements ensures" -> "These improvements ensure" * L245: "denotes maximum size of" -> "denotes the maximum size of" * L7 of Algorithm 1: Why is $\langle, \rangle$ used instead of $(, )$ to denote tuples as in L72? * Equation (2) uses $L$ but Equation (3) uses $\mathcal{L}$. Moreover, this notation is not defined (and it's Lipschitz w.r.t. which metric?). Questions For Authors: * Is there a mechanism to prevent a collapse of $\pi_{\theta_2}$? For example, it can collapse to a single point (e.g., $(1, 1, \cdots, 1)$) to maximize $d_z$, and keeping this collapsed skill sampler policy for a while would maximize $d_\theta$ as well. If this specific skill leads to high regrets, then (hypothetically) the agent might end up learning only a single skill. Is there a reason why this doesn't happen in general? * Unlike METRA, $z_\mathrm{updated}$ in Equation (10) depends on the current latent state $\phi(s_t)$. How does this affect skill learning? Can the authors elaborate on the pros and cons of this choice? Code Of Conduct: Affirmed. 
Overall Recommendation: 3
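The core sampling idea summarized in this review — biasing skill selection by the regret $V_k - V_{k-1}$ — can be illustrated with a toy discrete sketch. This is my own simplification (a softmax over per-skill regret estimates) rather than RSD's learned Gaussian sampler $\pi_{\theta_2}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def regret_weights(v_curr, v_prev, temp=1.0):
    """Turn per-skill regret estimates (V_k - V_{k-1}) into sampling
    probabilities via a softmax, so high-regret (under-converged)
    skills are drawn more often."""
    regret = np.asarray(v_curr, dtype=float) - np.asarray(v_prev, dtype=float)
    z = (regret - regret.max()) / temp  # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

def sample_skill(skills, v_curr, v_prev):
    """Draw one skill from the population, biased toward high regret."""
    p = regret_weights(v_curr, v_prev)
    return skills[rng.choice(len(skills), p=p)]
```

With regrets `[0, 4, 0.1]`, the middle skill receives most of the probability mass; in RSD proper, the sampler is instead a learned policy trained to maximize regret, with diversity constraints against the existing skill population.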
Rebuttal 1: Rebuttal: ## **Q1: Collapsing Concern** Thank you for raising this insightful question. Our method is specifically designed to prevent skill collapse. As shown in Eq. (15), the skills are maintained within a population $P_z$. We address the diversity concern (i.e., avoiding convergence to a single point) from three perspectives: 1. **KL divergence constraint (Eq. 16):** We enforce a divergence between the newly generated skill distribution $\pi_{\theta_2}$ and existing skills in $P_z$ using KL divergence. This ensures new skills remain distinct. 2. **Minimum variance and overlap check:** The sampler $\pi_{\theta_2}$ (Gaussian) is constrained to maintain a minimum std (1e-1). We also check for distributional overlap before adding new skills, preserving diversity. 3. **Empirical validation (Figure 7):** Figure 7 shows a consistent decrease in average regret, mainly driven by SAC. As SAC improves, $\pi_{\theta_2}$ keeps adapting, avoiding collapse to a single mode. Please let us know if further clarification is needed! ## **Q2: State-dependent Skill $z_{\text{updated}}$** Thank you for your careful observation. - When $z$ is fixed across time, Eq. (4) tends to force straight-line trajectories. In skill-asymmetric environments, skills often share key states, making this problematic (Figure 5). - Allowing $z_{\text{updated}}$ to vary with $\phi$ and time $t$ enables distinct **trajectory curves**, even with shared segments, better capturing behavioral nuances. **Advantages:** Improves **skill diversity** (Figures 5 & 6), better covers complex regions (e.g., maze corners), and outperforms METRA in skill expressiveness. **Disadvantages:** Increases **learning complexity**, as seen in Figure 3(a), where RSD is slightly less efficient in simpler environments. ## **Discussion on Related Work** We appreciate the reviewer highlighting TLDR[Bae et al., 2024], which is cited in Lines 303, 329, and Appendix A.1 (L608). 
Our method focuses on the skill sampling scheme, whereas TLDR relies on RND and KNN-based exploration strategies. We will clarify this in the revised related work section. ## **Weakness Response** ### 1. Clarity of Policies $\pi_{\theta_1}$ and $\pi_{\theta_2}$ We revised the start of Section 3 using a bullet-point format to clearly distinguish the two policies. We believe this change helps readers immediately grasp the roles of each policy. ### 2. Organization of Section 3 Our intention was to follow a “global-to-local” structure for clarity. - Base reward is defined earlier in **Preliminaries (Eq. 4)**. - Keeping Section 3.1 ahead preserves the motivation for why $\pi_{\theta_1}$ minimizes and $\pi_{\theta_2}$ maximizes regret. ### 3. Why $\pi_{\theta_1}$ Minimizes Regret (Eq. 8) Great point — we clarify: - Regret is estimated via Eq. 7. - If $\pi_{\theta_1}$ has converged for skill $z$, regret (value improvement) is near zero. - Thus, minimizing regret implies convergence under that skill. Note: $\pi_{\theta_1}$ is still trained with the standard SAC loss (Eq. 13, Alg. 1 line 10). ### 4. Intuition Behind Eq. (11) For example, if $\|\phi(s_T)\| \leq 1$ and we set $\phi(s_0) = 0$, then by telescoping: $\| \sum_{t=0}^{T-1} (\phi(s_{t+1}) - \phi(s_t)) \| \leq 1$. To simplify, we enforce Eq. (11), and one solution is $\| \phi(s_{t+1}) - \phi(s_t) \| \leq 1/T$. ### 5. Additional Results We added experiments in the **Kitchen environment** (pixel observations, robotic control), following **METRA**’s setup and reporting success out of 7 tasks. As shown in the tables below, our method shows both **higher sample efficiency** and **stronger final performance**. Interestingly, at **dim = 32**, our method exhibits **faster efficiency gains** (e.g., Δ@300k = **+0.51**) compared to **dim = 24** (Δ@300k = **+0.36**), suggesting that our efficient approach benefits more from a larger latent space. This trend is even clearer when visualized in the learning curves. 
#### Performance @ Skill Dim = 24 | **Model** | **Step = 100k** | **Step = 200k** | **Step = 300k** | **Step = 400k** | **Δ@400k** | |:-:|:-:|:-:|:-:|:-:|:-:| | *METRA* | **2.72 ± 0.19** | 3.20 ± 0.27 | 3.93 ± 0.67 | 3.94 ± 0.36 | | | *Ours* | 2.41 ± 0.02 | **3.61 ± 0.02** | **4.29 ± 0.11** | **5.08 ± 0.45** | **+1.14** | #### Performance @ Skill Dim = 32 | **Model** | **Step = 100k** | **Step = 200k** | **Step = 300k** | **Step = 400k** | **Δ@400k** | |:-:|:-:|:-:|:-:|:-:|:-:| | *METRA*| 3.21 ± 0.71| 3.66 ± 0.69| 3.83 ± 0.90| 3.99 ± 0.94 | | | *Ours*| **3.51 ± 0.23** | **3.91 ± 0.43** | **4.34 ± 0.39** | **4.55 ± 0.35** | **+0.56** | ### 6. Grammatical Errors We have addressed the mentioned issues. For the Lipschitz constraint, METRA uses L2 in the paper, though their code supports L1. Line 193 has also been revised for rigor. Thank you for your time and for recognizing our work! Due to space constraints, some explanations have been shortened — please feel free to reach out if any clarification is needed. --- Rebuttal Comment 1.1: Comment: Thanks for the response. My main concerns have mostly been addressed. --- Reply to Comment 1.1.1: Comment: Thank you again for your recognition of our work! Your positive feedback means a lot to our team. We truly appreciate your time and thoughtful review. Wishing you all the best in your research and everyday life!
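Point 4 of the rebuttal above compresses the telescoping argument; written out under its stated assumptions ($\phi(s_0) = 0$ and the bounded representation $\|\phi(s_T)\| \leq 1$), the chain is:

```latex
\phi(s_T) = \sum_{t=0}^{T-1} \bigl(\phi(s_{t+1}) - \phi(s_t)\bigr)
\quad\Longrightarrow\quad
\Bigl\| \sum_{t=0}^{T-1} \bigl(\phi(s_{t+1}) - \phi(s_t)\bigr) \Bigr\|
= \|\phi(s_T)\| \leq 1 .
```

By the triangle inequality, a sufficient per-step condition for this bound is the uniform constraint $\|\phi(s_{t+1}) - \phi(s_t)\| \leq 1/T$, which is the "one solution" the rebuttal refers to.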
Summary: This paper proposes a new unsupervised skill discovery method that uses regret to guide skill sampling and skill policy learning. The regret is computed from the estimation error of the value function during learning. Based on that, the sampling strategy is not a fixed-parameter distribution as in previous methods. The paper claims the skill policy can prioritize the exploration of under-converged skills during policy learning. This method improves learning efficiency and increases skill diversity in simulated environments. Claims And Evidence: The claim that "incorporating regret awareness into skill discovery can enhance the learning efficiency" is supported by the experiment in Figure 3, which shows a higher Unique Coordinates measure during training. Methods And Evaluation Criteria: The regret-aware method fits well in unsupervised skill discovery problems. Theoretical Claims: This paper mainly focuses on empirical results, and no major theoretical issues were detected. Experimental Designs Or Analyses: The experiments conducted in the DM Control and D4RL environments are well designed, with statistical significance reported, providing confidence in the results. The visualization experiment is also clear. However, the experimental settings primarily focus on state-based environments, while the compared baselines, like RSD and METRA, include more complex environments with pixel-based observations as input. This discrepancy makes it difficult to definitively conclude that the proposed method outperforms these baselines across a broader range of scenarios. In particular, the reported sensitivity of skill generation (Section 3.3) to hyperparameters raises concerns about the robustness of the method. This instability could limit its reliability and generalizability across a broader range of scenarios beyond the state-based environments tested. Supplementary Material: I have reviewed the supplementary material. 
Relation To Broader Scientific Literature: The paper builds on skill discovery RL methods such as the compared baselines LSD, RSD, and METRA. The "regret bonus" is a well-known concept in the exploration RL setting. Essential References Not Discussed: Some references I know of are related to the unsupervised skill discovery setting but do not seem to be included in this paper: [1] Explore, Discover and Learn: Unsupervised Discovery of State-Covering Skills. Víctor Campos, Alexander Trott, Caiming Xiong, Richard Socher, Xavier Giro-i-Nieto, Jordi Torres [2] Behavior Contrastive Learning for Unsupervised Skill Discovery. Rushuai Yang, Chenjia Bai, Hongyi Guo, Siyuan Li, Bin Zhao, Zhen Wang, Peng Liu, Xuelong Li [3] Choreographer: Learning and Adapting Skills in Imagination. Pietro Mazzaglia, Tim Verbelen, Bart Dhoedt, Alexandre Lacoste, Sai Rajeswar Other Strengths And Weaknesses: Strengths: 1. This paper is well written and easy to read. 2. The regret-aware idea for skill discovery is novel. Weaknesses: 1. Please see Experimental Designs Or Analyses above. Other Comments Or Suggestions: Consider adding more experiments to strengthen applicability, e.g., a pixel-based version of Kitchen as suggested by METRA^[1], or the Unsupervised Reinforcement Learning Benchmark^[2]. [1] METRA: Scalable Unsupervised RL with Metric-Aware Abstraction. Seohong Park, Oleh Rybkin, Sergey Levine. [2] URLB: Unsupervised Reinforcement Learning Benchmark. Michael Laskin, Denis Yarats, Hao Liu, Kimin Lee, Albert Zhan, Kevin Lu, Catherine Cang, Lerrel Pinto, Pieter Abbeel Questions For Authors: 1. How are the Unique Coordinates on the y-axis of Figure 3 calculated? It’s unclear whether the score (presumably represented on the y-axis) shows marginal improvement over the baselines or iterations. Could you clarify the calculation process and whether the trend indicates meaningful improvement? A detailed response would help me assess the significance of the reported results and could influence my evaluation of the method’s effectiveness. 
Code Of Conduct: Affirmed. Overall Recommendation: 2
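The review above summarizes regret as the estimation error (value improvement) of the value function during learning. As a rough, hypothetical sketch of how such a per-skill regret signal could be tracked, one might keep a sliding window of value estimates per skill and report the improvement across the window; the class name, the window default, and the estimator itself are illustrative assumptions, not the paper's implementation:

```python
from collections import deque

class RegretTracker:
    """Hypothetical per-skill regret-as-value-improvement tracker.

    Once the policy has converged for a skill, its recent value estimates
    stop improving, so the windowed improvement (the "regret" proxy) goes
    to zero; unexplored skills get maximal priority.
    """
    def __init__(self, window=15):
        self.window = window
        self.values = {}  # skill_id -> deque of recent value estimates

    def update(self, skill_id, value_estimate):
        buf = self.values.setdefault(skill_id, deque(maxlen=self.window))
        buf.append(value_estimate)

    def regret(self, skill_id):
        buf = self.values.get(skill_id)
        if not buf or len(buf) < 2:
            return float("inf")  # never-sampled skills are prioritized
        return max(buf[-1] - buf[0], 0.0)  # improvement across the window
```

A skill sampler could then draw skills with probability proportional to this regret proxy, concentrating training on under-converged skills.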
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. ### Q1: Unique Coordinates Metric 1. **Origin of the Metric**: The metric is inspired by the *Policy State Coverage* measure introduced in the METRA paper (Section 5.3), where it was used to evaluate the **spatial coverage** of skill policies, referred to as x-y Coverage in the Maze environment. In our work, we rename it to *Unique Coordinates* to improve specificity, as noted in Section 4.1 (line 312). 2. **Definition**: The metric essentially counts the number of unique 2D coordinates visited by the agent during skill executions. We extract the x and y coordinates from states (hence the name Coordinates), then count the number of distinct positions (hence Unique). 3. **Computation**: For each algorithm: - Extract trajectory x-y states for each skill; - Discretize the x-y values (e.g., round to integers); - Count the number of unique coordinate pairs across all rollouts. 4. **Significance**: This metric reflects how diversely the skills explore the state space. As supported in prior work, reaching distinct regions of the environment implies higher skill diversity (Figure 4). We will include this clarification in the revised version. --- ### Q2: Additional Experiments 1. **Kitchen**: It uses **pixel-based observations** and involves **manipulation** tasks, making it different from Ant-based settings. 2. **Motivation**: Following ([issue](https://github.com/seohongpark/METRA/issues/9#issuecomment-2565876012)), pixel-based tasks are indeed computationally demanding. We prioritized the Kitchen environment as it is both challenging and practically meaningful under time constraints. 3. **Experimental Design**: Following METRA’s setup, we report **the number of completed** tasks (7 in total) using **mean ± std** over three seeds {0,2,4}. 
#### Performance @ Skill Dim = 24 |**Model**| **Step = 100k**| **Step = 200k** | **Step = 300k** | **Step = 400k** | **Δ@400k** | |:-:|:-:|:-:|:-:|:-:|:-:| |METRA|**2.72±0.19**|3.20±0.27|3.93±0.67|3.94±0.36|| |Ours|2.41±0.02|**3.61±0.02**|**4.29±0.11**|**5.08±0.45**|+1.14| #### Performance @ Skill Dim = 32 |**Model** | **Step = 100k** | **Step = 200k** | **Step = 300k** | **Step = 400k** | **Δ@400k** | |:-:|:-:|:-:|:-:|:-:|:-:| |METRA|3.21±0.71|3.66±0.69|3.83±0.90|3.99±0.94|| |Ours|**3.51±0.23**|**3.91±0.43**|**4.34±0.39**|**4.55±0.35**|+0.56| Our method shows both **higher sample efficiency** and **stronger final performance**. At **dim = 32**, our method exhibits **faster** efficiency gains (e.g., Δ@300k = **+0.51**) compared to **dim = 24** (Δ@300k = **+0.36**), suggesting that our approach benefits more from a larger latent space. This trend is even clearer when visualized in the learning curves. --- ### Q3: Hyperparameter Sensitivity and Stability 1. **Ablation**: We include ablations (Appendix E) for critical hyperparameters, such as the regret window size (`window`) and regularization weights (`alpha_1`, `alpha_2`). 2. **Tuning**: We argue that hyperparameter tuning in our method is **guided rather than purely search-based**, as the hyperparameters primarily influence skill diversity. For instance, the KL divergence term in Eq. (16) provides a direct and interpretable signal for adjusting `alpha_1`. 3. **Empirical Settings**: - **Ant**/**Maze**: Used a fixed setting (`alpha_1=5`, `alpha_2=1`, `window=15`) across experiments. - **Kitchen**: Used lighter settings (`alpha_1=1`, `alpha_2=0`, `window=8`) due to inherent diversity. While we also considered adaptive schemes (e.g., Lagrange multipliers), we avoided them to maintain a lightweight design. --- ### Q4: Related Work 1. **Explore, Discover and Learn: Unsupervised Discovery of State-Covering Skills** This paper is cited in our Appendix B. It shares similar experimental setups with ours. 
However, we did not include it as a primary baseline because it focuses on simpler 2D navigation tasks and does not scale easily to high-dimensional settings. 2. **Behavior Contrastive Learning for Unsupervised Skill Discovery** Thank you for the recommendation. Despite conceptual similarity, it lacks demonstration in high-dimensional environments. We also tried contrastive losses but observed unstable learning, potentially due to conflicting gradients in METRA. We acknowledge this as a promising direction. 3. **Choreographer: Learning and Adapting Skills in Imagination** An insightful suggestion. The key differences are: - It's a **model-based** RL approach, which increases sample efficiency but at a higher computational cost; - It's **exploration-agnostic**, relying on separate exploration strategies (as noted in their Appendix A), whereas our method integrates exploration directly into the skill learning process. Although these works tackle skill discovery, we address different challenges. We will add these references to our revised Related Work section. We hope these resolve your concern! --- Rebuttal Comment 1.1: Comment: Thank you for your response. The additional detailed information has addressed my concerns regarding certain definitions in this paper. The preliminary experiment comparing METRA with pixel-based observations shows promise; however, I believe the next version would benefit from more thorough experiments and in-depth analysis. Additionally, the paper should include further content and analysis on this topic. Based on these considerations, I have decided to maintain my score. --- Reply to Comment 1.1.1: Comment: Thank you very much for recognizing the promise of our work. We believe that the clarifications and additions made during the rebuttal process have significantly strengthened the paper, both in terms of clarity and technical contribution. 
We are confident that, with these improvements, the paper presents a meaningful and timely contribution to the community. Although you decided not to increase your score, we sincerely appreciate your thoughtful review and the time you dedicated to our submission.
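For concreteness, the Unique Coordinates computation described in this thread (extract x-y states, discretize, count distinct pairs across all rollouts) can be sketched as follows. The trajectory format and the `resolution` parameter are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def unique_coordinates(trajectories, resolution=1.0):
    """Count distinct discretized x-y cells visited across skill rollouts.

    `trajectories` is assumed to be a list of (T, state_dim) arrays whose
    first two columns hold the x and y position (a simplifying assumption;
    the actual state layout depends on the environment).
    """
    cells = set()
    for traj in trajectories:
        xy = np.asarray(traj)[:, :2]                         # extract x-y states
        discretized = np.floor(xy / resolution).astype(int)  # e.g. round to integer cells
        cells.update(map(tuple, discretized))                # unique coordinate pairs
    return len(cells)
```

With `resolution=1.0` this matches the "rounded to integers" discretization mentioned in the rebuttal; a finer resolution would make the metric more sensitive to small displacements.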
Summary: The paper presents Regret-aware Skill Discovery (RSD), a novel approach to unsupervised skill discovery in reinforcement learning. The authors conceptualize skill discovery as a min-max adversarial game between skill generation and policy learning. Their key insight is that skill discovery should be guided by policy strength convergence - focusing exploration on skills with weak, unconverged strength while reducing exploration for skills with already converged strength. To implement this, they use a regret measure to quantify policy strength convergence and employ a learnable skill generator within a population-based framework. Claims And Evidence: The main claims about RSD outperforming baselines in terms of efficiency and diversity are generally supported by the experimental results presented in the paper. The authors show performance comparisons across multiple environments (Ant, Maze2d-large, Antmaze-medium, Antmaze-large), demonstrating improved coverage metrics and zero-shot performance. However, the evidence would be stronger with clearer reporting of statistical significance - the paper lacks information about the number of independent seeds for experiments and doesn't provide error bars or uncertainty measurements for the reported results in Table 1. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are reasonable for the problem of unsupervised skill discovery. The authors use appropriate metrics like state coverage (CoverCoords) to measure skill diversity and zero-shot performance to assess skill utility. However, the CoverCoords metric could be more interpretable if reported as percentages of achievable coordinates on the map rather than absolute values. The zero-shot experiment looks a bit artificial to me: it shows improved results for RSD, but given that RSD achieves higher coverage (CoverCoords), it naturally has better chances of reaching goals. 
Theoretical Claims: Not applicable (there are no theoretical claims) Experimental Designs Or Analyses: I checked the experimental design for comparing RSD against baseline methods. While the experimental setup appears sound overall, there are several omissions: 1. The number of independent seeds run for each experiment is not specified 2. Statistical significance of results is not reported (no error bars in Figure 3 or uncertainty measures in Table 1) 3. There are inconsistencies in the reported timescales (Figure 3 shows results for 5e6 timesteps for Maze2d-large, while Figure 7 shows results for 1e7 timesteps for the same environment) 4. The AntMaze2D results mentioned in Appendix D appear to be missing. Supplementary Material: I read all Appendix sections. But I did not review nor execute the provided code. Relation To Broader Scientific Literature: The paper positions RSD appropriately within the unsupervised skill discovery literature, building upon temporal representation learning approaches like METRA. The authors clearly acknowledge and compare to previous work on mutual information-based skill discovery (DIAYN, DADS, LSD, METRA). They also explain how their approach differs by focusing on the relationship between skills and policy learning strength. The regret-aware optimization bears some resemblance to concepts from Learning Process, which the authors acknowledge. Essential References Not Discussed: To the best of my knowledge, all the relevant related works are cited/discussed in the paper. Other Strengths And Weaknesses: Strengths: - The paper is well-organized and clearly written. - The algorithm is well-illustrated with helpful figures that explain the approach. - The experiments demonstrate meaningful improvements over existing methods. - The bounded representation learning approach appears to be a useful contribution. Weaknesses: - The authors claim RSD is a framework, but it's only implemented on top of METRA. 
To substantiate this claim, they should demonstrate applicability to other base algorithms like LSD. - Ablation studies are relegated to the appendix (Appendix E) without being properly referenced in the main text. Other Comments Or Suggestions: Typos and presentation issues: - Algorithm 1, line 10: should be $\pi_{\theta_1}$ instead of $\pi_{\theta_2}$ - Line 216: extra "the" - Line 236, 2nd column: missing space before "Therefore" - Figure 6: "Sapce" instead of "Space" - Line 328: "DIYAN" instead of "DIAYN" - Line 312, 2nd column: "the both" is awkward phrasing - Line 380: "RSDlearns" should have a space - Line 345, 2nd column: "Maeze2d-large" instead of "Maze2d-large" - The colors in Figure 6 make it difficult to read Questions For Authors: 1. How many independent seeds were run for each experiment? What do the bold lines and shaded areas represent on the plots in Figure 3? What do the numbers in Table 1 represent (averages, medians)? What are the uncertainties in these measurements? 2. You claim RSD is a framework applicable beyond METRA. Have you tested applying it to other skill discovery algorithms such as LSD? If so, what were the results? 3. In Figure 6, how many timesteps does an "epoch" represent? 4. Why are the results in Figure 7 presented for 1e7 timesteps, whereas for the same environment in Figure 3, the results are presented for 5e6 timesteps? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. Below we address each point in detail. ### **Q1. Statistical Reporting** We appreciate the reviewer’s emphasis on statistical rigor. All experiments were conducted using **five independent random seeds**: `{0, 2, 4, 8, 16}`. - In **Figure 3**, the **bold lines** indicate the **mean**, and the **shaded areas** represent the **standard deviation** across these runs. - In **Table 1**, we originally reported the **mean performance** only. In the revised version, we will **add uncertainty measures** to each entry. For example, in the *Maze2d-large* environment: ``` RSD (ours): FD↓ = 0.535 ± 0.152, AR↑ = 0.811 ± 0.046 METRA-d: FD↓ = 0.658 ± 0.154, AR↑ = 0.782 ± 0.043 ``` These results demonstrate that RSD achieves competitive performance with **comparable variance**, supporting the robustness of our method. ### **Q2. Applicability Beyond METRA** Thank you for the insightful question regarding generality. We certainly plan to explore this framework with other models (for example in VLA) in future work. Our claim that RSD is a **general framework** refers to the **min-max optimization formulation in Equation 8**, which is **algorithm-agnostic** and, in principle, applicable to a wide range of **unsupervised skill discovery (USD)** methods. In this work, we chose to **implement RSD on top of METRA** rather than LSD. This is because **METRA**, an improved version of LSD, provides **key theoretical properties** (Equation 3) that we leverage. In Section 3.2 (Equations 2 and 13), we introduce **targeted modifications** to enhance performance in skill-asymmetric environments. ### **Q3. Epochs in Figure 6** We appreciate the reviewer’s comment regarding Figure 6. 
The relationship between **epochs and timesteps** is reflected in **Algorithm 1 (lines 2 and 5)**: ``` timesteps = epoch × num_trajectories × max_length ``` As described in **Appendix C.1**, we use: - `num_trajectories = 16` - `max_length = 300` So the approximate mapping is: ``` 4000 epochs → 2e7 timesteps 8000 epochs → 4e7 timesteps 14000 epochs → 7e7 timesteps 18000 epochs → 9e7 timesteps ``` In the revised version, we will **redraw Figure 6** with **explicit timestep annotations** and **improve color contrast**, as suggested, to improve readability. ### **Q4. Timestep Scales in Figures 3 and 7** Thank you for noting the difference in timestep scales. This design choice is **intentional**, as the two figures serve **different purposes**: - **Figure 3** highlights **early-phase learning efficiency** (shown up to **5e6** timesteps), with a focus on the **0–2e6** range. - **Figure 7** illustrates **long-term convergence and stability**, plotted up to **1e7** timesteps. To avoid confusion, we will **explicitly clarify this** in the caption of Figure 7 in the final version. ### **Additional Comments** #### 1. Ablation Study Referencing We agree that the ablation study contains important insights. In the revised version, we will: - **Explicitly reference Appendix E in the main text** - **Summarize key findings** to highlight contributions of individual components #### 2. Missing AntMaze2D in Appendix D Thank you for catching this. **Appendix D contains visualized results** for the experiments in Table 1. We will: - **Clarify this reference in the main text** - Add **explanatory notes in Appendix D** to improve readability #### 3. Zero-shot Evaluation Concern We appreciate your thoughtful concern. The goal of our zero-shot experiment is to evaluate **practical utility**. Indeed, **RSD achieves superior zero-shot performance partly due to higher coverage** — which we see as a **desired property**. 
The improved zero-shot success rate is a **direct consequence of more effective skill discovery**, which we believe substantiates our central claim. ### **Typos and Presentation Issues** We thank the reviewer for the attention to detail. All identified **typos and presentation issues have been carefully corrected** in the revised version to improve clarity and presentation quality. Once again, we sincerely appreciate the reviewer’s time and valuable, constructive feedback. We believe it has significantly enhanced the clarity, rigor, and overall quality of our paper. --- Rebuttal Comment 1.1: Comment: Thank you very much for your response, my concerns have been addressed. Given the paper's novel contribution and comprehensive baseline comparisons, I believe the paper should be accepted, and I've updated my score accordingly. Nonetheless, I agree with other reviewers that comparing RSD on more tasks would make the paper stronger.
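As a quick sanity check on the epoch-to-timestep mapping stated in the rebuttal above (`timesteps = epoch × num_trajectories × max_length`, with `num_trajectories = 16` and `max_length = 300` from Appendix C.1), the reported approximate values work out as follows:

```python
# Verify the approximate epoch -> timestep table from the rebuttal,
# using the values it reports (num_trajectories = 16, max_length = 300).
NUM_TRAJECTORIES = 16
MAX_LENGTH = 300

def epochs_to_timesteps(epochs: int) -> int:
    return epochs * NUM_TRAJECTORIES * MAX_LENGTH

for epochs, approx in [(4000, 2e7), (8000, 4e7), (14000, 7e7), (18000, 9e7)]:
    exact = epochs_to_timesteps(epochs)
    # each exact value is within 5% of the rounded figure quoted above
    assert abs(exact - approx) / approx < 0.05
    print(f"{epochs} epochs -> {exact:,} timesteps (~{approx:.0e})")
```

So 4000 epochs corresponds to 19.2M timesteps, consistent with the "≈ 2e7" figure quoted in the thread.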
Summary: This paper is on unsupervised skill discovery within the context of Markov decision processes. It builds on a collection of earlier papers that aim to learn diverse and distinguishable behaviours, for example, DIAYN by Eysenbach et al. (ICLR 2019). The contribution of this paper is a new algorithm, which the authors call Regret-Aware Skill Discovery (RSD). Conceptually, their main idea is to prioritise the exploration of skills the authors call "under-converged" rather than indiscriminately exploring the whole skill space. The authors present an experimental evaluation of their approach. Claims And Evidence: I find that this paper is not clearly written. It is difficult to pinpoint exactly what the claims are. One claim seems to be that "skill discovery should be grounded by the policy strength" (page 1). I am not fully clear on what is meant by policy strength. In addition, given such a broad claim about skill discovery, the empirical evaluation should present evidence from a broad collection of skill discovery algorithms, in a broad range of problem domains; this is not the case in the paper. A second claim is that the proposed method "significantly improves the learning efficiency and increases skill diversity in complex environments" and "achieves up to a 16% improvement in zero-shot evaluation compared to baselines". The support for these claims comes from a rather narrow experimental evaluation in terms of baseline algorithms considered and domains tested (details below) compared to the general practice in the literature (see, for example, METRA by Park et al., ICLR 2024). Methods And Evaluation Criteria: The experiments are quite narrow in their scope. The authors can provide stronger support for their claims by evaluating their approach in a broader range of domains and by comparing to a broader set of baseline algorithms. Currently, all experiments in the paper have been performed in the ant environment. 
In terms of algorithms, the baselines tested in the paper are only those that are very closely related to the proposed approach. If the claim is a new approach that "significantly improves the learning efficiency", a broader set of algorithms should be tested as baselines, for example, those that use intrinsic rewards. Theoretical Claims: N/A Experimental Designs Or Analyses: The primary issue with the experimental design is its narrow scope. Supplementary Material: No. Relation To Broader Scientific Literature: The paper introduces a new approach within a particular line of research on unsupervised skill discovery. I am not able to judge its significance based on the relatively narrow analysis in the paper. Essential References Not Discussed: Nothing particular. Other Strengths And Weaknesses: The paper may have a useful contribution to make but requires a more clear statement of its claims and a more rigorous evaluation of those claims. Other Comments Or Suggestions: Define Phi on line 131. Questions For Authors: Nothing particular. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: ### Q1: Motivation of "Policy Strength" 1. **Definition:** The term *policy strength*, derived from "the strength of the policy", refers to the **capability** of a policy to accomplish its designated objective—corresponding to **a specific skill** in our context. 2. **Why we use "policy strength":** In this work, we adopt the term *policy strength* to concisely describe the **capability of the policy under skill-conditioned training**. More specifically, we are interested in how well the policy can be optimized given a particular skill, and whether it has **converged**, or **how far it is from convergence**. This perspective helps us assess whether a skill has been sufficiently explored or still requires further training attention. 3. **Motivation and necessity:** Initially, we considered using phrases like *"the performance of the policy"*, but we were concerned that *performance* might be misinterpreted as referring to the cumulative reward. This can be misleading in unsupervised skill discovery, where intrinsic rewards are not always directly comparable between different skills. In contrast, *policy strength* focuses on the **optimization dynamics** and **convergence behavior** of the policy, thus avoiding ambiguity tied to reward metrics. We acknowledge the term may be unconventional and will revise the introduction to clearly define it. --- ### Q2: More Experimental Evaluation Thank you for encouraging broader validation. 1. **Choice of Maze Environment** We extend METRA’s evaluation by including **Maze**, a more challenging setup for skill discovery [1]. As seen in *Figure 5*, METRA underperforms here due to the **skill-asymmetry issue**, which our method specifically targets. Additionally, we assess **zero-shot performance** on downstream tasks, following standard protocol. 2. 
**Evaluation on the Kitchen Environment** In response to your concern, we additionally include experimental results on the **Kitchen**: - **Kitchen Environment**: Kitchen involves **pixel-based observations** and more complex robot-**manipulation tasks**, which are evaluated in METRA. - **Why Kitchen**: As noted in [this issue](https://github.com/seohongpark/METRA/issues/9#issuecomment-2565876012), pixel-based tasks are indeed **computationally demanding**. We chose Kitchen for its practical challenge and feasibility under limited time. - **Experimental Setup**: Following METRA’s setup, we measure the **number of completed tasks** (7 total) using **mean ± std** over three seeds `{0, 2, 4}`. We evaluate across different training steps and skill dimensions (`dim=24, 32`). #### Performance @ Skill Dim = 24 |**Model**| **Step = 100k**| **Step=200k** | **Step = 300k** | **Step = 400k** | **Δ@400k** | |:-:|:-:|:-:|:-:|:-:|:-:| |METRA|**2.72±0.19**|3.20±0.27|3.93±0.67|3.94±0.36|| |Ours|2.41±0.02|**3.61±0.02**|**4.29±0.11**|**5.08±0.45**|+1.14| #### Performance @ Skill Dim = 32 |**Model** | **Step = 100k** | **Step = 200k** | **Step = 300k** | **Step = 400k** | **Δ@400k** | |:-:|:-:|:-:|:-:|:-:|:-:| |METRA|3.21±0.71|3.66±0.69|3.83±0.90|3.99±0.94|| |Ours|**3.51±0.23**|**3.91±0.43**|**4.34±0.39**|**4.55±0.35**|+0.56| - **Results Summary**: Our method outperforms METRA in both **sample efficiency** and **final performance**. At **skill dim = 32**, our method exhibits greater gains with training progress (e.g., Δ@300k = +0.51 vs. +0.36 at dim=24). This trend is even clearer when visualized in the curves. --- ### Q3: Baseline Selection 1. **Baseline Selection** Our method is built upon METRA’s framework. Our primary contribution lies in **enhancing the sampling efficiency** by incorporating ideas from **Prioritized Level Replay (PLR)**, enabling more effective skill discovery within the METRA setup. 2. 
**Why we do not compare against intrinsic reward-based methods** As discussed in **Appendix A.1**, we chose not to include other intrinsic reward methods for the following reasons: - **RND-based** methods improve exploration but are **not tailored for skill discovery**. - **KNN-based** methods are limited in high-dimensional representation space. Crucially, our focus is different: we optimize **skill distribution** during training using a regret-driven min-max formulation, unlike the uniform sampling in METRA and related work. We hope this explains our baseline decisions. --- ### Q4: Define $\phi$ on Line 131 We will add the following definition to improve clarity: > A representation function $ \phi: \mathcal{S} \rightarrow \mathbb{R}^d $, which maps states into a skill-aligned latent space that shares the same dimensionality as the skill variable $ z \in Z $. --- [1] Campos, Víctor, et al. "Explore, discover and learn: Unsupervised discovery of state-covering skills." *International conference on machine learning*. PMLR, 2020.
ResQ: Mixed-Precision Quantization of Large Language Models with Low-Rank Residuals
Accept (spotlight poster)
Summary: This paper introduces ResQ, a post-training quantization (PTQ) method for large language models (LLMs) that enables mixed-precision quantization of weights, activations, and KV caches. Experimental results demonstrate that ResQ achieves superior performance compared to existing methods. Claims And Evidence: The paper claims the development of custom CUDA kernels as a main contribution. However, this claim is not adequately supported. There is no detailed description of these kernels in the manuscript, and the anonymous code link does not appear to contain their implementation or sufficient details to verify this claim. Furthermore, the fast Hadamard transform is well established in prior work (e.g., https://github.com/Dao-AILab/fast-hadamard-transform). The paper does not clearly propose any novel contribution. Methods And Evaluation Criteria: The choice of mixed-precision quantization and a rotation-based approach is reasonable, given the established effectiveness of both techniques in addressing quantization challenges. However, the novelty of combining these specific approaches is not fully confirmed. Theoretical Claims: The proofs for the theoretical claims in this paper are correct. Experimental Designs Or Analyses: The experimental design generally follows established practices in the field. Supplementary Material: The supplementary material gives more detailed proofs and experimental results. Relation To Broader Scientific Literature: The key contributions of the paper do not appear to introduce significant new findings or ideas compared to the existing scientific literature. Essential References Not Discussed: The paper appears to comprehensively cover the relevant related works and does not seem to have omitted any essential references. Other Strengths And Weaknesses: A primary concern is the limited novelty of the proposed approach. 
While the paper combines mixed-precision quantization and a rotation-based method, this combination lacks significant new insights. The work feels more like a technical report demonstrating the application of existing techniques rather than a substantial research contribution. Other Comments Or Suggestions: A significant claim made in the paper is the development of custom CUDA kernels for performance acceleration. However, this claim is not supported by sufficient evidence within the paper. There is no dedicated section or subsection detailing the design, implementation, or optimization strategies employed in these kernels. Questions For Authors: Figure 4 in the submission appears to be highly similar (**only color changed?**) in structure and content to Figure 1 in SpinQuant paper (https://arxiv.org/pdf/2405.16406). Could the authors clarify the relationship between these two figures and explain any differences or novel aspects of Figure 4 in the current work? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you, Reviewer Azbj, for putting effort into reviewing our paper. We provide a response to your concerns below. 1. **Limited Novelty:** While we agree with the reviewer regarding the statement that ResQ is a rotation and mixed-precision quantization based approach, we respectfully disagree regarding the novelty aspect of our work. To the best of our knowledge, there is no work which does what ResQ does, and we would be interested if the reviewer could point us to a related paper. In Table 4 of the main paper, we also show that ResQ outperforms a hypothetical outlier+rotation based approach, which is an amalgamation of the related baselines QUIK (EMNLP 2024) and QuaRot (NeurIPS 2024). This highlights the optimality of PCA-based high-precision component extraction, which again, to the best of our knowledge, has not been explored before in the literature. Additionally, we provide an even more comprehensive comparison in Table 10 below, where we compare ResQ's and the outlier+rotation baseline's Reasoning and MMLU accuracy in addition to Wikitext perplexity for Qwen2.5 models, emphasizing that **ResQ outperforms the outlier+rotation baseline across the stack**. Based on these insights, we humbly request the reviewer to re-evaluate their stance on the novelty of our work. Table 10: Comparison of ResQ with outlier+rotation approach. | Model | Method | Wiki. PPL | Avg. Reasoning acc. | Avg. MMLU acc. | |:---:|:---:|:---:|:---:|:---:| | Qwen2.5-3B | outlier+rot. | 9.4 | 58.4 | 59.2 | | | ResQ | **9.0** | **61.1** | **61.2** | | Qwen2.5-7B | outlier+rot. | 10.5 | 64.1 | 64.9 | | | ResQ | **8.2** | **65.3** | **69.0** | | Qwen2.5-14B | outlier+rot. | 6.4 | 68.0 | 73.8 | | | ResQ | **6.2** | **69.2** | **74.6** | Detailed task-wise results for the above table can be found [here](https://shorturl.at/KTiZR) in Table 4. 2. **CUDA Kernel:** Thank you for highlighting this point. 
We will include additional details about the CUDA kernel in the main paper and plan to release the code upon acceptance. The CUDA kernel involves mixed-precision quantization of activations into 4/8-bit components within a single kernel, low-precision GEMM of 4-bit and 8-bit operands, and fused dequantization of the 4-bit and 8-bit results. Further, the hardware implementation involves quantizing and packing the mixed-precision KV cache for memory efficiency. Additionally, we provide several new hardware-related results to address the reviewer's concerns: improved memory usage with ResQ (Table 6, response to Reviewer vpAp), speedup achieved by ResQ on long context lengths (Table 2, response to Reviewer UnBL), inference latency in a distributed setting representative of datacenter workloads (Table 7, response to Reviewer vpAp), and a comparison of ResQ with a baseline quantization kernel (Table 5, response to Reviewer vpAp). We hope these additions comprehensively address the concerns regarding the CUDA kernel. 3. **Figure 4:** ResQ and SpinQuant project weights and activations with projection matrices in a similar manner. While SpinQuant trains the projection matrices and utilizes uniform-precision quantization across the layers, ResQ uses a PCA basis and random orthogonal rotations as projection matrices and performs mixed-precision quantization. The key differentiation in Figure 4 is the set of layers and activations which are quantized to mixed precision: for ResQ, all layers and activations except the down\_proj layer are quantized to mixed precision.
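As a concrete illustration of what the fused kernel computes (not the CUDA implementation itself), here is a minimal NumPy sketch of mixed-precision fake-quantization of activations. It assumes symmetric per-tensor scales and that, after projection, the leading `r` channels form the 8-bit group; all function and variable names are our own, not ResQ's code:

```python
import numpy as np

def quantize_symmetric(x, bits):
    """Symmetric uniform fake-quantization of x to the given bit-width."""
    qmax = 2 ** (bits - 1) - 1
    amax = np.abs(x).max()
    scale = amax / qmax if amax > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale  # dequantized values, for error inspection

def mixed_precision_quantize(x, r):
    """Quantize the first r channels to 8-bit and the rest to 4-bit.

    x: (tokens, channels) activations, assumed already projected so that
    the high-variance components occupy the leading r channels.
    """
    x_hi = quantize_symmetric(x[:, :r], bits=8)
    x_lo = quantize_symmetric(x[:, r:], bits=4)
    return np.concatenate([x_hi, x_lo], axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 64))
xq = mixed_precision_quantize(x, r=8)
# the 8-bit channels should reconstruct far more accurately than 4-bit ones
err_hi = np.abs(xq[:, :8] - x[:, :8]).mean()
err_lo = np.abs(xq[:, 8:] - x[:, 8:]).mean()
```

In the actual kernel both groups would of course be quantized, matrix-multiplied, and dequantized in one fused pass; the sketch only shows the numerical split.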
Summary: The paper presents ResQ, a mixed-precision post-training quantization (PTQ) method for large language models (LLMs). The core idea of ResQ is to compute the orthogonal transformations using PCA and decompose the orthogonal matrices for high-precision and low-precision based on their corresponding eigenvalues. Moreover, high-precision weights and activations are cast in 8-bit. Experiments demonstrate that ResQ outperforms strong baselines such as SpinQuant, QuaRot, and QUIK. **Update after rebuttal**: My latest reply reflected my final update. Claims And Evidence: [Correct Claims] * Custom cuda kernels to speed up inference. * Theoretical analysis on PCA-based projections. * ResQ outperforms recent uniform and mixed precision PTQ methods. -> partially correct and I have some comments in Weakness in Experimental Designs Or Analyses. [Problematic Claims] * In line 29, the paper claims using PCA to compute orthogonal matrices is "provably optimal mixed precision quantization scheme". However, in theorem 4.2, the method is optimal in minimizing layer loss, but it may not be optimal on the final output loss, and the latter is done in SpinQuant. Therefore, it would be great to tune down this claim a bit, and I am curious why ResQ performs better than SpinQuant (please see my comment in Weakness in Experimental Designs Or Analyses). Methods And Evaluation Criteria: * The datasets and baselines make sense. Theoretical Claims: I didn't check the proof of Theorem 4.2. Experimental Designs Or Analyses: [Strength] * The experiments are conducted on multiple models and benchmarks. [Weakness] * For the compared methods, some are uniform and some are mixed-precision quantization, and it is unclear how many additional bits the mixed-precision methods used. There should be a column in the tables about the number of bits for each method for a fair comparison. 
It would also be fairer to use the same bit-width for all the approaches; perhaps the authors can adjust the number of bits for the weights, or for the scales and zero-points introduced in quantization, to achieve this. Supplementary Material: There is no Supplementary Material. Relation To Broader Scientific Literature: * PCA for orthogonal transformation -> related to low-rank decomposition, like LoRA and other rotation methods (QuaRot, SpinQuant). * The method combines rotation and mixed-precision quantization -> which is an interesting combination from previous methods. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: * As the PCA part is the main novelty of this paper, it would be great to have a standalone section or at least a paragraph to introduce it. I spent some time before finally finding it in Sec 4.2. * In line 219, defining how to compute the quantization SNR would be great, as it seems to pop out of nowhere here. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you, Reviewer x9DL, for your effort in reviewing our paper. We appreciate your recognition of the strength of ResQ's experimental section. Below, we respond to the concerns you raised. 1. **Claim regarding quantization error:** ResQ's approach of keeping the coefficients along the principal eigenvectors of the activations in 8-bit is indeed **optimal for minimizing local quantization error**. Projection with the PCA basis $P$ minimizes quantization error across the high/low-precision groups, while the random orthogonal matrix $R$ improves quantization error within each high/low-precision group. We will further clarify this point in the main paper. While SpinQuant learns rotation matrices minimizing the final loss, it still keeps all components in 4-bit, which does not reduce quantization error enough, whereas ResQ intelligently performs mixed-precision quantization. Moreover, SpinQuant's training of rotation matrices can additionally be incorporated into ResQ at the cost of high calibration time: the 8-bit components can be chosen via the PCA basis ($P$), and the orthogonal matrix that minimizes quantization error within the 4-bit and 8-bit quantization groups ($R$) can be learned by minimizing the output loss. Such an approach further improves the performance of ResQ, as shown in Table 8 below, surpassing SpinQuant by an even greater margin.

Table 8: Performance of ResQ and a variant of ResQ which trains the rotation $R$.

| Model | Method | Wiki. PPL | Avg. Reasoning acc. | Avg. MMLU acc. |
|:---:|:---:|:---:|:---:|:---:|
| Meta-Llama-3-8B | ResQ | 7.1 | 63.9 | 57.2 |
| | ResQ + training $R$ | **7.0** | **64.5** | **58.3** |
| Llama-2-7B | ResQ | **5.8** | 62.0 | 37.7 |
| | ResQ + training $R$ | **5.8** | **62.2** | **38.0** |
| Qwen2.5-7B | ResQ | 8.2 | 65.3 | **69.0** |
| | ResQ + training $R$ | **8.0** | **65.8** | **69.0** |
| Llama-3.2-1B | ResQ | 12.4 | **50.1** | 29.4 |
| | ResQ + training $R$ | **11.7** | **50.1** | **29.6** |

Detailed task-wise results for the above table can be found [here](https://shorturl.at/KTiZR) in Table 3. 2. **Iso-bitwidth comparison:** Both ResQ and the previous state-of-the-art mixed-precision quantization technique, QUIK (EMNLP 2024), keep $\frac{1}{8}$ of the channels in 8-bit, which brings the average bit-width to 4.5 bits. For the other baselines, achieving a fractional bit-width is impossible with uniform-precision quantization across layers and channels. Additionally, as suggested by the reviewer, we perform a comparison at an equal bit-width of 4 bits. We create a variant of ResQ which keeps the first $\frac{d}{8}$ channels corresponding to the lowest eigenvalues of $P$ in 2-bit, the $\frac{d}{8}$ channels corresponding to the highest eigenvalues of $P$ in 6-bit, and the remaining channels in 4-bit, achieving an average bit-width of 4 bits. As shown in Table 9 below, **even at an iso-bitwidth of 4 bits, ResQ outperforms SpinQuant and QuaRot**, highlighting its capabilities.

Table 9: Iso-bitwidth comparison between ResQ, QuaRot, and SpinQuant.

| Model | Method | Wiki. PPL | Avg. Reasoning acc. | Avg. MMLU acc. |
|:---:|:---:|:---:|:---:|:---:|
| Qwen2.5-3B | QuaRot | 68.8 | 47.7 | 28.9 |
| | SpinQuant | 70.6 | 48.6 | 32.8 |
| | ResQ | **9.8** | **59.1** | **52.2** |
| Qwen2.5-7B | QuaRot | 4e3 | 38.4 | 24.1 |
| | SpinQuant | 3e3 | 38.6 | 24.3 |
| | ResQ | **34.2** | **56.2** | **58.0** |
| Qwen2.5-14B | QuaRot | 6.8 | 67.1 | 70.9 |
| | SpinQuant | 6.6 | 67.4 | 70.1 |
| | ResQ | **6.5** | **67.5** | **71.3** |
| Qwen2.5-32B | QuaRot | 6.1 | 67.8 | 77.0 |
| | SpinQuant | 6.0 | 67.9 | 77.6 |
| | ResQ | **5.9** | **69.1** | **77.9** |
| Qwen2.5-72B | QuaRot | **4.9** | 70.3 | 80.1 |
| | ResQ | **4.9** | **71.1** | **80.1** |

Detailed task-wise results for the above table can be found [here](https://shorturl.at/KTiZR) in Table 5. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the rebuttal contents, which addressed my concerns about a fairer bit comparison. I will increase the rating accordingly.
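The PCA-based channel selection discussed in point 1 of the rebuttal above can be illustrated with a short sketch (ours, not the authors' code): compute an orthogonal basis from the covariance of calibration activations, sort by descending eigenvalue, and check that the leading columns capture most of the energy. The synthetic data and all names are assumptions for illustration only:

```python
import numpy as np

def pca_projection(calib_acts):
    """Eigendecomposition of the calibration activation covariance.

    Returns an orthogonal basis with columns sorted by descending
    eigenvalue, so the leading columns span the high-variance
    (kept-in-8-bit) subspace.
    """
    cov = calib_acts.T @ calib_acts / len(calib_acts)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalue order
    order = np.argsort(eigvals)[::-1]        # reorder to descending
    return eigvecs[:, order], eigvals[order]

rng = np.random.default_rng(0)
d, r = 64, 8
# synthetic activations with a few dominant directions
scales = np.ones(d)
scales[:r] = 20.0
acts = rng.normal(size=(4096, d)) * scales
P, lam = pca_projection(acts)
proj = acts @ P  # leading r columns now carry most of the energy

hi_energy = (proj[:, :r] ** 2).sum()
total = (proj ** 2).sum()
```

Under this scheme the leading `r` projected channels would be kept in 8-bit and the rest in 4-bit, which is the split the rebuttal argues is optimal for local quantization error.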
Summary: The paper proposes a novel algorithm to separate high/low values and respectively smooth and quantize them with different precisions. Theoretical analyses suggest that by introducing a designed matrix $P$, the upper bound of the error can be minimized. The experimental results illustrate the effectiveness and efficiency of the proposed method. Claims And Evidence: The claims made are supported by clear and convincing evidence. Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate for the problem. Theoretical Claims: I have not carefully checked the proofs of the theoretical claims. However, the claims make sense and are likely to hold. Experimental Designs Or Analyses: The experiment is well-structured. However, from my perspective, there are two issues that should be addressed: 1. Memory reduction compared to FP16, INT4, and/or other baseline methods should be provided. 2. Is it possible to validate speedup on other NVIDIA GPUs, such as the A100? Supplementary Material: I have briefly reviewed the appendix. Relation To Broader Scientific Literature: The key contributions can be summarized as follows: The proposed method employs PCA-like permutation and rotation matrices to effectively mitigate outliers in WA quantization and enhance performance, surpassing SOTA methods such as QuaRot and SpinQuant. The proposed method is also validated through speedup comparisons, further illustrating its efficiency, leading to the development of (mixed-precision) WAKV quantization. Essential References Not Discussed: To the best of my knowledge, there are no missing references. Other Strengths And Weaknesses: Strengths: 1. The proposed method is novel and efficient. The use of the PCA method addresses the difficulties in separating high/low values in $X$, contributing to the success of the mixed-precision quantization method, which is quite impressive. 2. The experiment is solid and persuasive in supporting the proposed method. 
The results are strong, and the overhead is acceptable. 3. The paper is well-structured, making it easy for readers to follow. By the way, this is quite an impressive paper among those I have reviewed. Weakness: Aside from the two issues mentioned in the "Experimental Designs or Analyses" section, there is one more weakness: The authors should try to provide more details regarding the ResQ kernel. Is it a simple combination of INT4 and INT8 kernels with an addition? If so, I believe that developing CUDA kernels alone can hardly be considered a main contribution in the introduction section. Other Comments Or Suggestions: No additional comments or suggestions. Questions For Authors: See weaknesses above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you, Reviewer vpAp, for your effort in reviewing our paper. We are delighted that you find our approach impressive. We provide responses to your questions below. 1. **Details about the compute kernel**: The compute kernel goes beyond a simple combination of INT4 and INT8 kernels. More precisely, the mixed-precision quantization of activations into 4/8-bit components is handled by a single kernel call, as opposed to two calls to the quantization kernel. Further, the hardware implementation involves quantizing and packing the mixed-precision KV cache for memory efficiency. We will provide more details in the camera-ready version of the paper and will release the code upon acceptance. Additionally, we compare the compute kernel with the simple combination of INT4 and INT8 kernels mentioned by the reviewer (Table 5 below). **The inference latency with ResQ is up to 1.3x lower than with a simple combination of INT4 and INT8 kernels.**

Table 5: Per-decoder latency (in ms) of ResQ versus the simple compute kernel.

| Model | Seq\_len | Simple | ResQ | Improv. |
|:---:|:---:|:---:|:---:|:---:|
| Llama-3.2-3B | 512 | 1.8 | 1.3 | 1.33 |
| | 8192 | 27.6 | 20.5 | 1.34 |
| Meta-Llama-3-8B | 512 | 2.6 | 2.0 | 1.3 |
| | 8192 | 40.9 | 31.9 | 1.29 |
| Qwen2.5-72B | 512 | 5.9 | 5.0 | 1.16 |
| | 8192 | 95.3 | 85.5 | 1.11 |

2. **Memory reduction with ResQ:** We show the memory usage of ResQ on an RTX 3090 (24 GB) against the FP16 baseline and the QuaRot (INT4) baseline at different sequence lengths in Table 6 below. **ResQ consumes 1.84-3.08x less memory than the FP16 baseline** and requires slightly (4-11\%) more memory than QuaRot. Further, ResQ supports inference of Qwen2.5-14B, where the FP16 baseline runs into an OOM error.

Table 6: Memory (in GB) of ResQ and baselines on an RTX 3090.

| Model | seq\_len | FP16 | QuaRot | ResQ | Improv. over FP16 | Improv. over QuaRot |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Meta-Llama-3-8B | 8192 | 21.9 | 11.4 | 11.9 | 1.84 | 0.96 |
| | 2048 | 16.7 | 6.8 | 7.2 | 2.31 | 0.94 |
| | 512 | 15.4 | 5.6 | 6.1 | 2.54 | 0.93 |
| Llama-2-7B-hf | 8192 | 18.1 | 6.2 | 6.8 | 2.66 | 0.91 |
| | 2048 | 13.9 | 4.2 | 4.7 | 2.95 | 0.89 |
| | 512 | 12.9 | 3.7 | 4.2 | 3.08 | 0.89 |
| Qwen2.5-14B | 8192 | OOM | 19.5 | 21.3 | -- | 0.92 |
| | 2048 | OOM | 14.0 | 14.9 | -- | 0.94 |
| | 512 | OOM | 12.6 | 13.5 | -- | 0.93 |

3. **Inference on NVIDIA A100**: Our intention with ResQ was to target inference on consumer devices, which is why we report speedups on an RTX 3090 at batch size 1. While datacenter-scale 4-bit LLM inference is valuable future work, we also provide latency results for Meta-Llama-3-70B on an NVIDIA A100 server (Table 7 below). Notably, **ResQ enables the 70B model to fit on a single GPU**, whereas the FP16 baseline requires three GPUs. In this setting, ResQ runs data-parallel inference, while FP16 uses model parallelism. ResQ achieves up to 4.98x lower latency across various batch sizes and sequence lengths. More sophisticated model-parallel inference approaches like pipeline parallelism would only improve the throughput of the FP16 baseline but would not improve per-batch latency.

Table 7: Meta-Llama-3-70B inference latency (in ms) on 3x NVIDIA A100 GPUs.

| batch\_size | seq\_len | FP16 | ResQ | Improv. |
|:---:|:---:|:---:|:---:|:---:|
| 3 | 10240 | 20783 | 4242 | 4.90x |
| 3 | 8192 | 16373 | 3361 | 4.87x |
| 3 | 4096 | 7871 | 1609 | 4.89x |
| 3 | 2048 | 3888 | 806 | 4.82x |
| 6 | 2048 | 7733 | 1560 | 4.96x |
| 9 | 2048 | 11493 | 2309 | 4.98x |

--- Rebuttal Comment 1.1: Comment: Thanks to the authors for their rebuttal, which has addressed all my concerns. **Overall, I think it is an impressive paper and should definitely be accepted**.
Summary: This paper proposes ResQ, a post-training quantization (PTQ) framework that targets aggressive 4-bit quantization of large language models (LLMs) for weights, activations, and KV caches. The key idea is to identify and preserve a low-dimensional subspace of “important” activation components in higher bit precision (8-bit) while quantizing the remaining channels to 4-bit. Specifically, the method uses PCA to find the top-r principal directions of the activation distribution (one-eighth of the hidden dimension in many cases) and keeps those in 8-bit; the other channels go to 4-bit. Within each subspace (high-precision and low-precision), ResQ applies a random orthogonal rotation to suppress outliers. This approach aims to minimize overall quantization error, maintain near-baseline performance at 4-bit, and deliver competitive speedups over 16-bit inference. The authors integrate their projection matrices into the model architecture for minimal runtime overhead and benchmark ResQ extensively on multiple LLMs (Llama series, Qwen2.5, and Qwen2-VL) across language modeling, reasoning, and multimodal tasks. Results show perplexity and speedup gains with minimal additional calibration effort compared to other methods. Claims And Evidence: ResQ’s key proposition is that it can enable robust 4-bit quantization of weights, activations, and KV caches, largely closing the performance gap to 16-bit baselines. The authors support this by benchmarking on tasks such as Wikitext perplexity and MMLU zero-shot accuracy, showing clear improvements compared to established PTQ methods (e.g., GPTQ, SpinQuant). While the results strongly indicate that ResQ maintains accuracy in low-bit regimes, a gap remains in exploring full multi-GPU or large-batch deployment. Another core claim is that retaining a small low-rank subspace in higher precision is theoretically near-optimal. 
This is justified by a theoretical bound positing that the directions with largest eigenvalues dominate quantization error, and rotating activations in each subspace helps suppress outliers. However, it relies on assumptions of near-Gaussian distributions, which may not always hold under real-world activation behaviors. A third claim addresses ResQ’s runtime overhead, contending that by fusing projection matrices into existing layer weights, the impact on throughput is minimal. Indeed, single-block timing tests on an RTX 3090 GPU show up to 3× speedup over 16-bit baselines with only minor slowdowns relative to purely INT4 kernels. Yet, the paper does not provide extensive breakdowns of concurrency or other (or multi-) GPU scenarios, leaving some open questions about overhead at scale. Lastly, ResQ’s generalization to large and multimodal models is highlighted by successful application to 70B parameter Llama and Qwen2 VL, although overhead details for extremely large or specialized models are not examined in depth. Methods And Evaluation Criteria: ResQ fits within the broader family of mixed-precision post-training quantization methods. The authors evaluate primarily on: - Wikitext Perplexity: A common measure of pure language modeling fidelity. - 0-shot accuracy on standard reasoning tasks (ARC, BoolQ, etc.) and MMLU to test knowledge retention and general understanding. - Generative tasks (GSM8K for math, code completion, summarization) to assess the approach on auto-regressive generation. - MMMU for multimodal comprehension using Qwen2-VL. Such a diverse evaluation set is a strength: it shows that ResQ is robust across typical PTQ tasks (language modeling, reasoning, generation) as well as specialized tasks (multimodal). However, as with many PTQ papers, the chosen tasks mostly focus on correctness or perplexity rather than fine-grained analysis of speed–accuracy trade-offs under real-world concurrency or large-batch scenarios. 
Theoretical Claims: The authors prove an upper bound on quantization error under Gaussian assumptions, showing that the subspace-based approach is near-optimal in minimizing the Frobenius norm difference. Although typical in quantization research, the proof remains partly heuristic (due to normality assumptions), but this is consistent with contemporary PTQ literature. Experimental Designs Or Analyses: The authors primarily employ Wikitext perplexity and multiple 0-shot benchmarks—MMLU, ARC, BoolQ, HellaSwag, and more—to evaluate how well quantized models retain their reasoning or language modeling capabilities. They also include a set of generation tasks such as GSM8K math problems and code completion, offering a more comprehensive view of performance beyond classification or short-answer tasks. ResQ is compared against well-known baselines (e.g., GPTQ, SmoothQuant+, SpinQuant), and the consistent outperformance on perplexity and accuracy indicates that subspace-based 4-bit quantization indeed preserves essential model quality. The authors further examine a small but crucial set of ablations, for instance by removing or altering projection matrices for attention and feedforward blocks, and see noticeable drops in performance when these projections are omitted. Finally, while they do measure kernel-level speedups on a single GPU, there is comparatively limited exploration of multi-GPU scaling or diverse inference conditions. This narrower hardware focus, although still useful, suggests that additional profiling under larger-batch or distributed-serving scenarios would strengthen the paper’s overall claims regarding real-world usability. Supplementary Material: The supplementary content primarily consists of detailed derivations, extra ablations, extended tables, and model information. 
Relation To Broader Scientific Literature: ResQ extends a line of LLM PTQ research focusing on 4-bit quantization with minimal accuracy loss: - GPTQ introduced Hessian-based weight-only quantization. - SmoothQuant, OmniQuant addressed outlier channels through amplitude scaling. - QUIK, QuaRot, SpinQuant used outlier or rotation-based methods to handle activation extremes. Essential References Not Discussed: None Other Strengths And Weaknesses: **Strengths** - Good empirical results: Achieves near-best perplexities, 0-shot accuracies, and generative outcomes at 4-bit. - Implementation details: The authors describe how to fuse the projection matrices into the model to reduce overhead. They also measure actual runtime speedups, reinforcing practicality. - Applicability: The method is tested on a broad set of LLMs (1B to 70B+ parameters) and even vision-language models, suggesting decent generalizability. **Weaknesses** - Limited multi-GPU or concurrency analysis: Real-world inference typically runs parallel requests or large batches. The paper focuses on single-GPU, small-batch speed tests. - Random rotation overhead: While somewhat mitigated by fusing, there remains an unquantified overhead in larger contexts or many decoding steps, especially for the UD projection in the FFN. - Minimal exploration of extremely large contexts: Since KV cache quantization is a key selling point, more exhaustive or real-time scenarios with 8K+ tokens might better showcase memory/time improvements. - Heuristic theory: The analysis of normal-distribution-based error bounds is fairly standard in modern quantization, but it might under- or overestimate the actual distribution complexities in LLM activations. Other Comments Or Suggestions: None Questions For Authors: - Have you tested how ResQ’s overhead scales in multi-GPU or distributed settings where activation projections might incur extra synchronization costs? Are the speedups consistent with single-GPU results? 
- For models that support context lengths beyond 8k tokens, how does ResQ perform in practice? Does the overhead of repeated rotation or subspace decomposition increase, or does it remain relatively stable? - You allow a flexible subspace dimension (e.g., d/8). Have you tested intermediate ranks or adaptive ranks per layer? How does the rank selection differ among different layers or for different activation distributions? - Could combining PCA-based subspace extraction with advanced codebook-based or cluster-based quantization (similar to AQLM) yield further gains? Or is the advantage of random rotation overshadowed by codebook overhead? - Since you observed that small calibration sets (128–512 samples) can suffice, do you see performance fluctuations if the calibration set is domain-specific vs. general Wikitext? Are there domain mismatches that degrade performance? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you, Reviewer UnBL, for your effort in reviewing our paper and acknowledging the strong empirical results and applicability of ResQ. We answer the listed questions below and incorporate your constructive feedback to further strengthen our work. 1. **Heuristic theory:** We agree with the reviewer that most quantization error analyses, including ResQ's, assume normally distributed activations. In practice, LLM activations deviate from normality. However, in ResQ, the activation distribution becomes approximately Gaussian after projection via the orthogonal matrix $U$ (as shown in Lemma 4.1). To support this, we compute the kurtosis of activations before and after projection, shown in Table 1 below. Post-projection activations $XU_l$ and $XU_h$ exhibit kurtosis near 3, indicating Gaussianity. Thus, our theoretical assumptions hold in practice.

Table 1: Kurtosis of activations before and after projection.

|Model|Layer|$X$|$XU_l$|$XU_h$|
|---|:---:|:---:|:---:|:---:|
|Qwen2.5-3B|Attn|91.9$\pm$38.8|3.0$\pm$0.005|2.9$\pm$0.07|
||MLP|179.8$\pm$248.3|3.0$\pm$0.004|2.9$\pm$0.07|
|Qwen2.5-7B|Attn|75.6$\pm$60.5|3.0$\pm$0.002|3.0$\pm$0.02|
||MLP|164.7$\pm$243.1|3.0$\pm$0.0|2.9$\pm$0.04|
|Meta-Llama-3-8B|Attn|37.9$\pm$50.3|3.0$\pm$0.0|3.0$\pm$0.02|
||MLP|6.6$\pm$1.4|3.0$\pm$0.0|3.0$\pm$0.02|

2. **Distributed inference setting:** Please refer to Table 7 in our comment to Reviewer vpAp. 3. **Long-context inference:** We provide per-decoder speedup (similar to Fig. 5 in the paper) for longer context lengths up to 20k on the Qwen2.5-32B and Meta-Llama-3-70B models in Table 2 below. With increasing sequence length, the improvement of ResQ reduces slightly, still achieving a 2.33x (2.02x) speedup for Qwen2.5-32B (Meta-Llama-3-70B) at a sequence length of 20k.

Table 2: Per-decoder speedup of long-context inference on an RTX 3090, per sequence length.
|Model|8k|10k|12k|14k|16k|18k|20k|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|Qwen2.5-32B|2.76|2.68|2.58|2.50|2.43|2.38|2.33|
|Meta-Llama-3-70B|2.36|2.21|2.17|2.12|2.07|2.04|2.02|

4. **Flexible subspace dimension:** It is possible to assign different ranks across layers for the $U_B$ and $U_C$ projections, since they are different for each layer. It is not possible to have different ranks for $U_A$ with the current approach, since it is shared across layers. We test one approach of assigning a different rank for the 8-bit component of $U_B$ and $U_C$ based on the eigenvalue distribution of the keys and values. Specifically, for layers with higher eigenvalues we keep 15.6\% of channels in 8-bit, while for the rest we keep 9.3\% of channels in 8-bit, achieving 12.5\% high-precision components on average. Results with this approach (shown in Table 3 below) show that the flexible rank improves performance on reasoning accuracy but performs worse on MMLU, warranting important future exploration.

Table 3: Comparison of ResQ with a variant of ResQ having a flexible rank for $U_B, U_C$.

|Model|Method|Wiki.PPL|Avg.Reasoning acc.|Avg.MMLU acc.|
|:---:|:---:|:---:|:---:|:---:|
|Llama-3.2-3B|ResQ|8.8|59.0|**49.8**|
||ResQ-flex.rank|**8.7**|**59.2**|49.0|
|Meta-Llama-3-8B|ResQ|7.1|63.9|**57.2**|
||ResQ-flex.rank|**7.0**|**64.3**|56.8|
|Qwen2.5-3B|ResQ|**9.0**|61.1|61.2|
||ResQ-flex.rank|**9.0**|**61.9**|**61.3**|
|Qwen2.5-7B|ResQ|8.2|65.3|**69.0**|
||ResQ-flex.rank|**8.0**|**65.4**|68.6|

Task-wise results can be found [here](https://shorturl.at/KTiZR) in Table 1. 5. **Codebook-based quantization:** Codebook-based non-linear quantization methods (e.g., AQLM, mentioned by the reviewer) focus solely on weight quantization, whereas ResQ takes a more holistic approach by quantizing weights, activations, and the KV cache. To the best of our knowledge, no existing work explores non-linear quantization of LLM activations. This is an interesting direction, and we leave it for future work.
Thank you for bringing AQLM to our attention; we will cite the work. 6. **Calibration dataset:** To analyze the impact of the calibration dataset, we evaluate the performance of ResQ with 3 different datasets: two out-of-distribution language modeling datasets, C4 and PTB, and one instruction tuning dataset, Alpaca. The results provided in Table 4 below show no significant performance fluctuations across calibration datasets.

Table 4: Performance of ResQ with different calibration datasets.

|Model|Calib.dataset|Wiki.PPL|Avg.Reasoning acc.|Avg.MMLU acc.|
|:---:|:---:|:---:|:---:|:---:|
|Llama-3.2-3B|Wikitext|**8.8**|59.0|**49.8**|
||C4|**8.8**|**61.7**|48.6|
||PTB|**8.8**|59.1|47.6|
||Alpaca|**8.8**|58.9|48.0|
|Meta-Llama-3-8B|Wikitext|**7.1**|63.9|57.2|
||C4|**7.1**|**64.0**|57.2|
||PTB|**7.1**|63.9|56.3|
||Alpaca|**7.1**|63.9|**57.5**|
|Qwen2.5-3B|Wikitext|**9.0**|61.1|**61.2**|
||C4|**9.0**|60.7|59.6|
||PTB|9.1|59.7|59.5|
||Alpaca|**9.0**|61.3|60.9|
|Qwen2.5-7B|Wikitext|8.2|65.3|**69.0**|
||C4|8.2|65.7|68.5|
||PTB|**8.0**|65.3|68.8|
||Alpaca|8.9|65.7|68.4|

Task-wise results can be found [here](https://shorturl.at/KTiZR) in Table 2.
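The Gaussianization claim in point 1 of this rebuttal (kurtosis moving toward the Gaussian value of 3 after an orthogonal projection) can be reproduced on synthetic data with a short sketch. Here Laplace noise stands in for real heavy-tailed activations and a QR-based random orthogonal matrix stands in for $U$; this is our own illustration, not the authors' measurement code:

```python
import numpy as np

def kurtosis(x):
    """Pearson kurtosis over all entries (Gaussian value = 3)."""
    x = x.ravel()
    c = x - x.mean()
    return (c ** 4).mean() / (c ** 2).mean() ** 2

rng = np.random.default_rng(0)
d = 256
X = rng.laplace(size=(2000, d))              # heavy-tailed: kurtosis ~ 6
Q, _ = np.linalg.qr(rng.normal(size=(d, d))) # random orthogonal matrix

k_before = kurtosis(X)       # well above 3
k_after = kurtosis(X @ Q)    # each output mixes d inputs -> near 3
```

Each projected coordinate is a weighted sum of `d` independent heavy-tailed variables, so by a central-limit-style argument its distribution is close to Gaussian, matching the near-3 kurtosis values the authors report in Table 1.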
Decision Mixer: Integrating Long-term and Local Dependencies via Dynamic Token Selection for Decision-Making
Accept (poster)
Summary: This paper proposes adapting the attention block in transformers based on the Mixture of Experts (MoE) design to effectively balance capturing long-term dependencies and extracting local features. The authors adapt it to the Decision Transformer architecture, and extensive experiments demonstrate its superior performance in offline reinforcement learning. Claims And Evidence: Extensive experiments on standard benchmark datasets empirically demonstrate its superior performance compared to various offline RL methods, including both value-based and CSM-based approaches. Additionally, experiments highlight its computational efficiency over existing CSM methods, suggesting its potential to support scalable research in offline RL. However, the claim of theoretical consideration is unreasonable due to the absence of any formal theoretical analysis. Methods And Evaluation Criteria: The module design is generally reasonable for its intended purpose. However, I have two concerns: 1. The inconsistency of training and inference. During training, the selection of tokens passed through the attention block is determined by the hypernetwork, which considers the entire input sequence, an approach that is not feasible during inference. To address this, the authors introduce an auxiliary predictor to approximate the hypernetwork’s decision. However, this predictor operates on an incomplete sequence, missing crucial information available to the hypernetwork. This raises doubts about its ability to make accurate predictions. A more consistent approach might involve ensuring the hypernetwork also processes a causal sequence by masking future steps during training. 2. The novelty and contribution seem to be somewhat incremental. Many studies have explored the combination of MoE and transformers to improve computational efficiency.
While adapting this to the Decision Transformer is a reasonable extension, the paper's novelty and impact are limited due to insufficient discussion and comparison from a broader perspective beyond its specific application area. Theoretical Claims: Although the authors claimed theoretical consideration in the Introduction, there is no theoretical analysis in this paper. Experimental Designs Or Analyses: The experiments are thorough and well-executed. The ablation study and computational complexity analyses effectively validate the strength of their design. Supplementary Material: I briefly reviewed the additional experimental analyses, which appear to be informative and supportive. Relation To Broader Scientific Literature: Scaling offline RL is both a compelling and significant topic. However, there have been many works exploring the combination of MoE and transformers to enhance computational efficiency. Although adapting it to the Decision Transformer is reasonable, the novelty and impact of this paper are limited due to the lack of broader discussion and comparative analysis beyond a specific scenario. Essential References Not Discussed: Several previous works explore combining MoE and transformers to enhance computational efficiency: Fedus, William, Barret Zoph, and Noam Shazeer. "Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity." Journal of Machine Learning Research 23.120 (2022): 1-39. Dai, Damai, et al. "Deepseekmoe: Towards ultimate expert specialization in mixture-of-experts language models." arXiv preprint arXiv:2401.06066 (2024). Csordás, Róbert, et al. "Moeut: Mixture-of-experts universal transformers." arXiv preprint arXiv:2405.16039 (2024). Lepikhin, Dmitry, et al. "Gshard: Scaling giant models with conditional computation and automatic sharding." arXiv preprint arXiv:2006.16668 (2020). Other Strengths And Weaknesses: The paper is well-written, providing clear explanations and precise notations.
Other Comments Or Suggestions: No other comments.

Questions For Authors: No other questions.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thanks for the careful review of our work. Due to the strict word limit, we have tried to address the reviewers' comments carefully. All additional experiments will be incorporated into the text.

**Theoretical Claims**

We provided a brief analysis of the CSM method from the perspective of re-weighting in equation (2). Given the word limit and the need for academic rigor, we have removed the statement "theoretical analysis" from the main text.

**Concern 1: The inconsistency of training and inference**

We carefully considered using a more consistent approach to achieve our objectives. We designed three schemes:

(1) The hypernetwork uses a single token as input for training and inference, without needing an auxiliary predictor.
(2) The hypernetwork uses a causal sequence as input for training and inference, without needing an auxiliary predictor.
(3) The hypernetwork uses the entire sequence as input for training, providing data for the binary classification training of the auxiliary predictor, which is then used for inference (adopted by DM).

||(1)|(2)|(3)
-|-|-|-
halfcheetah-medium|27.5 $\pm$ 5.1 (48.0h)|40.9 $\pm$ 0.6 (20.5h)|43.5 $\pm$ 0.7 (7.5h)
hopper-medium|82.7 $\pm$ 18.0 (48.0h)|94.7 $\pm$ 5.0 (20.5h)|98.1 $\pm$ 3.6 (7.5h)
walker2d-medium|55.2 $\pm$ 4.9 (48.0h)|83.4 $\pm$ 2.7 (20.5h)|83.8 $\pm$ 0.8 (7.5h)
maze2d-umaze|59.1 $\pm$ 4.7 (24.0h)|83.2 $\pm$ 5.8 (15.0h)|86.9 $\pm$ 1.9 (6.9h)
antmaze-umaze|80.3 $\pm$ 12.4 (24.0h)|75.0 $\pm$ 9.1 (16.0h)|100.0 $\pm$ 0.5 (6.5h)

(3) demonstrated the best performance and stability across all tasks, with significantly shorter training times. Although (1) and (2) ensured consistency between training and inference, they struggled with convergence due to insufficient utilization of global information from the training data.
(3) adopts a task decomposition approach to implement training and inference hierarchically, with the auxiliary predictor trained on the prediction data from each round of the hypernetwork and router. Figure 1 of the pdf (https://anonymous.4open.science/r/Decision-Mixer-1068/rebuttal.pdf) shows the training loss curves, which indirectly suggest that binary classification of data on a specific Mixer layer for a given task is relatively easy to learn. Experimental results in Table 1 and the ablation study in Table 2 confirm that potential distribution shifts were addressed through synchronized training.

**Concern 2: The novelty and contribution seem to be incremental**

All prior works have tended to introduce expert routing mechanisms in FFN or attention layers without involving token selection, where a fixed number of experts handle different tokens in the complete sequence. We emphasize that DM differs fundamentally from existing works in component design and training approach. DM innovatively designs:

(1) a dynamic token selection mechanism to address sequence modeling conflicts specific to offline RL, differing from conventional static MoE. DM handles incomplete sequences and uses generalized residual connections after each layer to ensure the consistency of output and input lengths.
(2) During inference, we also designed a unique auxiliary predictor from the task decomposition perspective to address inconsistencies between training and inference.
(3) We deploy the selection before the attention layer, making DM a more flexible plug-and-play architecture. It is more cost-efficient than conventional combinations of MoE and transformers, providing a reasonable direction for exploring scaling laws under the DT architecture.

**Essential References Not Discussed**

We will discuss the combination of MoE and transformers in the text. GShard[1] introduced MoE into transformers to address load imbalance via routing and expert capacity constraints.
Switch Transformers[2] proposed a Top-1 gating mechanism to reduce computation and communication overhead. DeepSeekMoE[3] optimized expert utilization with fine-grained segmentation and expert isolation. MoEUT[4] combined MoE with the Universal Transformer, addressing the parameter-computation efficiency trade-off. Additionally, several works[5-8] explored MoE integration in visual tasks from the perspectives of data processing[5], multi-task learning[6], pre-training[7], and global training[8]. We discuss MoE because DM can be viewed as using a single expert to filter tokens before the standard transformer layer. However, this does not mean we have merely made a simple transplant. The router and the hypernetwork in DM, along with the auxiliary predictor and generalized residual connections, are seamlessly integrated, forming an efficient and practical framework that paves the way for future paradigms in offline RL.

[1] Gshard...
[2] Switch transformers...
[3] Deepseekmoe...
[4] Moeut...
[5] Scaling vision with sparse mixture...
[6] Adamv-moe: Adaptive multi-task vision...
[7] Moe jetpack: From dense checkpoints...
[8] Mod-squad: Designing mixtures...

---

Rebuttal Comment 1.1:

Comment: The rebuttal has satisfactorily addressed my concern regarding the inconsistency. I recommend that the authors incorporate the relevant discussion and supporting experiments into the main paper, as this will significantly enhance its quality. Additionally, the discussion on prior work combining MoE and Transformers is valuable and should also be included. However, I still find the novelty to be somewhat limited. Taking these factors into account, I have decided to raise my score to a weak accept.

---

Reply to Comment 1.1.1:

Comment: We sincerely thank the reviewer for kindly raising the score. We will incorporate all relevant discussions and supporting experiments suggested by the reviewer into the main paper.
We are also grateful for the reviewer's thoughts on the innovation of our work, which have prompted us to further reflect on and rearticulate DM's unique contributions in terms of novelty. We strive to address reviewers' concerns not only in this rebuttal but also by providing meaningful insights for the RL community.

- DM approaches the inherent feature trade-off problem in the DT architecture from a novel perspective of dynamic token selection, enabling a systematic exploration that significantly improves computational efficiency while ensuring robust performance gains.
- Unlike previous MoE+Transformer approaches that focus on statically expanding the number of experts and assigning tokens to experts, DM simplifies the design by discarding the concept of "experts." It adopts a single-router structure and performs token selection through a tightly coupled mechanism between a hypernetwork and an auxiliary predictor.
- The router, hypernetwork, and auxiliary predictor in DM are all simple in structure, tightly integrated, and highly modular, making them easy to plug into existing architectures to enhance performance.
- DM is the first to explore potential scaling laws in conditional sequence modeling (CSM). In contrast to prior closed-source studies that focus solely on increasing parameter scale, we have released all our code to support reproducibility and further research by the community.

We also explored integrating existing MoE+Transformer methods into the DT framework. Specifically, we focused on two approaches: MoE with token choice routing [1] + DT (referred to as Method 1) and MoE with expert choice routing [2] + DT (referred to as Method 2). Given the architectural differences between DM and existing MoE methods, we have made every effort to ensure fairness by limiting the number of experts to 8 and the top-k value to 2 in both Method 1 and Method 2.
Other hyperparameters (e.g., model depth, batch size) were kept broadly consistent with Table 6 in the appendix, with minor adjustments to maximize performance. All experiments were conducted in the Gym environment, and results were averaged over three random seeds.

||Method 1|Method 2|DM
-|-|-|-
halfcheetah-medium|23.9|20.1|43.5
hopper-medium|54.9|72.1|98.1
walker2d-medium|64.3|40.3|83.8
maze2d-umaze|49.5|77.6|86.9

The performance of the MoE+Transformer methods is inferior to that of DM, which is consistent with our intuition. Method 1 and Method 2 rely on static mechanisms to assign tokens to specific experts within the FFN layers, resulting in lower flexibility than DM. Moreover, the absence of a precise token selection mechanism before the standard transformer layers makes it difficult to support trajectory stitching, which limits the effectiveness of off-the-shelf MoE+Transformer methods in RL tasks. Additionally, MoE architectures are notoriously difficult to tune, and when data quality or quantity is insufficient, they are prone to sudden performance degradation. We will further elaborate on the uniqueness and contributions of our approach in the main paper.

[1] Gshard: Scaling giant models with conditional computation and automatic sharding.
[2] Mixture-of-Experts with Expert Choice Routing.
Summary: This paper introduces Decision Mixer (DM), a Transformer-based architecture for offline reinforcement learning. DM features a dynamic token selection mechanism, where a routing module learns to selectively attend to relevant past tokens during training. To enable efficient inference, an auxiliary predictor is trained concurrently to approximate token importance without access to future information. Experimental results across a diverse set of offline RL benchmarks—including standard locomotion tasks and MetaWorld—demonstrate that DM consistently outperforms existing approaches in both performance and efficiency.

## update after rebuttal

Considering the authors' response and other reviews, I have changed my score to weak accept.

Claims And Evidence: Although the motivations and solutions seem to be novel, there are a number of unclear factors in the paper that require clarification: I find it unclear how the router R and the hypernetwork H were trained in the proposed approach. Specifically, the training process for these components is not well explained in the manuscript. What is the architecture of these networks (not included in the manuscript), and is the training process based only on the downstream loss function in Eq. (6)? Additionally, I am curious about how the auxiliary predictor is trained. More details are needed on what constitutes the ground truth for training this predictor and how its predictions are evaluated during the learning process.

Methods And Evaluation Criteria: The benchmarks used for evaluation in this paper are diverse and suitable. These are benchmarks that have been used in previous DT-related research.

Theoretical Claims: There are no theoretical proofs in this paper.

Experimental Designs Or Analyses: The experiment designs have been checked, and they are reasonably formulated. The paper includes standard evaluation on Mujoco benchmarks; in addition, it also includes evaluation on MetaWorld environments.
However, the paper lacks evaluation on discrete-action-space environments (Atari games are often used in previous DT research).

Supplementary Material: The supplementary material has been reviewed. The architecture of router R and hypernetwork H seems to be missing from the appendix.

Relation To Broader Scientific Literature:

- The paper introduces Decision Mixer (DM), a low-complexity architecture aimed at balancing local and long-range dependencies in Decision Transformer (DT) models through a layer-wise token selection mechanism. Inspired by the Mixture-of-Experts (MoE) architecture, the authors propose a router network to select the tokens.
- While this method is novel in its application to offline reinforcement learning, token selection and dropping strategies have been explored in prior Transformer research. For example, Token Dropping for Efficient BERT Pretraining (arXiv:2203.13240) proposes dropping less important tokens mid-layer to improve training speed without sacrificing performance. Similarly, Random-LTD: Random and Layerwise Token Dropping (arXiv:2211.11586) presents a technique that skips computation for random subsets of tokens at intermediate layers to reduce cost.
- The authors should more thoroughly engage with the existing literature on token dropping in Transformers, including a comparison and contrast to clearly articulate the novelty of their approach. Additionally, further empirical evidence is needed to convincingly demonstrate the superiority of the proposed method in reinforcement learning settings.

Essential References Not Discussed: Missing baseline: The paper "Long-Short Decision Transformer: Bridging Global and Local Dependencies for Generalized Decision-Making" acknowledges the same problem and proposes a solution that should be compared here.
Other Strengths And Weaknesses: The paper presents a well-motivated problem, and the proposed solution seems to be novel in the RL setting, particularly in its ability to dynamically select token weights. Empirical results across multiple tasks and environments demonstrate performance improvements when applied to various base models, such as DT, QDT, and ODT.

Other Comments Or Suggestions:

- There is a mention of "MOD" in line 343, but I was unable to find what it refers to.
- For Figure 2, consider adding axis titles or specifying in the caption what the attention scores represent to improve clarity. For example, in DC, the relevant discussion section explicitly mentions that the attention scores reflect relationships at the token level (state, action, and returns-to-go). Providing similar context here would make the figure more interpretable.
- The term "threshold k" is somewhat misleading, especially when the output of the hypernetwork is denoted as \textit{k}. Changing top-k to top-\textit{k} might help readers understand that this refers to the same value.

Questions For Authors:

- Can you explain more about the results in Table 2 as to why incorporating such an auxiliary predictor improves the overall performance compared to just dynamic token selection?
- I believe it is necessary to include a clock-time measurement in addition to the complexity measurement, as the method involves the additional training of multiple networks. This would provide a more comprehensive assessment of the method's efficiency and resource requirements.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thanks for the detailed review of our work. Given the strict word limit, we have carefully addressed the reviewer's comments and will incorporate the additional experiments and suggestions into the updated version. Anonymous pdf: https://anonymous.4open.science/r/Decision-Mixer-1068/rebuttal.pdf

**The details of $R$ and $H$**

We found that both $R$ and $H$ can perform well using simple MLPs, as shown below. $R$ and $H$ are incorporated as part of the main model, with no additional constraints added, and the training is based solely on equation (6). Additional thoughts can be found in Q1 for Reviewer ekS2.

Network|Layer|Input|Output
-|-|-|-
$R$|Linear|embed_dim|1
$H$|Linear|context_length×embed_dim|512
||LeakyReLU||
||Linear|512|1
$θ_{aux}$|Linear|embed_dim|embed_dim//2
||SiLU||
||Linear|embed_dim//2|2

The auxiliary predictor $\theta^l_{aux}$ is trained with gradient isolation using the dynamic selection results from $R$ and $H$. The main model selects the top-$k$ tokens from $X^l$ and generates binary labels $z_i \in \{0,1\}$ as ground truth for $\theta^l_{aux}$. The predictor outputs binary logits $\hat{y}_i$, optimized via equation (5) to match $\sigma(\hat{y}_i)$ with $z_i$. We found the token selection distribution in binary classification easy to learn, with rapid loss convergence shown in Figure 1 of the anonymous pdf. Main experiments and ablation studies confirm prediction accuracy. Alternatives are discussed in our response to reviewer EJaz on "Inconsistency of training and inference."

**Lacks evaluation on specific environments**

We have added experimental results for DM on Atari, reporting the average performance across three random seeds. Table 1 of the anonymous pdf shows that DM performs well in discrete-action environments, demonstrating the method's generalizability. This success was expected, given the precedents set by DT and DC.
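The top-$k$ selection and binary labeling described for $R$ and $H$ above can be illustrated with a minimal pure-Python sketch. This is not the authors' implementation: the token names, scores, and scoring function are illustrative placeholders, and only the keep/drop logic mirrors the described mechanism.

```python
# Hypothetical sketch: a router assigns each token a score, the top-k
# tokens are kept for attention (in their original sequence order), and
# binary labels z_i are emitted as ground truth for the auxiliary
# predictor. Scores and token names below are illustrative only.

def select_tokens(tokens, scores, k):
    """Return (selected tokens, binary labels z_i) for the top-k scores."""
    order = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    keep = set(order[:k])
    labels = [1 if i in keep else 0 for i in range(len(tokens))]
    selected = [tokens[i] for i in range(len(tokens)) if i in keep]
    return selected, labels

tokens = ["s0", "a0", "R1", "s1", "a1", "R2"]
scores = [0.9, 0.1, 0.4, 0.8, 0.2, 0.7]   # router outputs (made up)
selected, labels = select_tokens(tokens, scores, k=3)
# selected == ["s0", "s1", "R2"]; labels == [1, 0, 0, 1, 0, 1]
```

Note that the selected tokens keep their original order, which matters for the subsequent attention computation over the shortened sequence.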
**More engagement with the existing literature**

We will provide a more detailed discussion in the main text. Token dropping, initially proposed to reduce BERT inference costs [1,2], was adapted by Hou et al. [3] to improve training efficiency. Random-LTD [4] advanced this with random layerwise token dropping and learning rate scheduling. While prior works focus on static efficiency strategies for vision and language tasks [5,6], they often lack dynamic adaptation, risking semantic disruption. In contrast, DM dynamically selects tokens using a router and hypernetwork, aligning with offline RL's Markovian nature for better performance. Its plug-and-play design and synchronized training offer greater flexibility than existing methods.

[1] Train short, test long...
[2] SpAtten: Efficient Sparse...
[3] Token dropping for...
[4] Random-ltd...
[5] Revisiting token dropping...
[6] Multi-Stage Vision Token...

**Further empirical evidence**

In the paper, we have included baseline comparisons in RL scenarios, component ablation studies, token selection statistics and visualizations, computational efficiency, training curves, generalization, portability, and context length experiments. Following reviewer feedback, we added DM's Atari results and adapted Random-LTD to DT in Gym. Table 2 of the anonymous pdf shows that Random-LTD underperforms DM in RL tasks, likely due to its reliance on random token dropping. We will attempt more transplant comparison experiments in future work.

**Essential References Not Discussed**

We will include LSDT in the related work and experimental comparison sections. LSDT integrates DT's self-attention and DC's dynamic convolution via a branch design for decision-making. DM extends these environments and outperforms LSDT in most experiments (Table 3 of the anonymous pdf).

**Other Comments**

- We apologize for the typo and will change "MOD" to "model".
- We have placed the updated Figure 2 in the anonymous PDF.
- We will replace the relevant terms with \textit{k}.

**Q1: Why does an auxiliary predictor improve performance**

The auxiliary predictor addresses issues arising from inconsistent input data formats during training and inference. $H$ uses the complete sequence to predict $k$, but during inference, the autoregressive nature of the transformer makes future tokens invisible, preventing sequence-level predictions. The token-level auxiliary predictor enables filtering without full sequence information. $R$ and $H$ jointly serve as teacher models, providing binary classification training data for the auxiliary predictor. This hierarchical training approach reduces the difficulty of training and fully utilizes all available data.

**Q2: Include a clock-time measurement**

We measured the clock time from training start to convergence across multiple tasks. The results in Table 4 (anonymous pdf) show that DM has a smaller time overhead than DT and is competitive with DC. Despite adding networks, DM's minimal parameters and dynamic token selection mechanism shorten the sequence length, avoiding significant clock-time increases.
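The teacher-student scheme in Q1 above ($R$ and $H$ produce keep/drop labels $z_i$; the auxiliary predictor is fit with a binary cross-entropy objective, cf. the paper's Eq. (5)) can be sketched in pure Python. This is a hedged illustration of a standard BCE loss on sigmoid outputs, not the paper's exact objective; the logits and labels are made up.

```python
# Sketch: the auxiliary predictor outputs a logit y_hat per token; its
# loss pushes sigmoid(y_hat) toward the teacher's binary label z_i.
# Values below are illustrative, not from the paper.
import math

def bce_loss(logits, labels):
    """Mean binary cross-entropy between sigmoid(logits) and labels z_i."""
    total = 0.0
    for y_hat, z in zip(logits, labels):
        p = 1.0 / (1.0 + math.exp(-y_hat))
        total += -(z * math.log(p) + (1 - z) * math.log(1 - p))
    return total / len(labels)

teacher_labels = [1, 0, 1, 1, 0]          # z_i from the top-k selection
predictor_logits = [2.0, -1.5, 1.0, 0.5, -2.0]
loss = bce_loss(predictor_logits, teacher_labels)
```

Since the logits agree in sign with the labels, the loss is small; gradient isolation (stopping gradients from this loss into the main model) keeps the teacher's selection unaffected by the student.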
Summary: This paper introduces Decision Mixer (DM), a select-concatenate-compute mechanism that improves efficiency in offline reinforcement learning. Inspired by MoE, DM dynamically filters key tokens for attention while retaining information from unselected ones. It also integrates an auxiliary predictor to mitigate short-sightedness. Experiments demonstrate that DM outperforms existing methods while significantly reducing computational overhead.

## update after rebuttal

I will keep my score.

Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria are suitable for the real-world problem.

Theoretical Claims: Yes, the theoretical claims and proofs have been examined. The auxiliary predictor in Section 3.3, formulated with Equation (5), provides a valid approach to handle autoregressive sampling constraints.

Experimental Designs Or Analyses: The experimental design is robust, with comprehensive benchmarking across diverse D4RL domains and thorough ablation studies to dissect the contributions of DM's components. A powerful aspect is the visualization of the token selection mechanism (Figure 4), which provides intuitive insights into how DM adapts to different tasks, such as the positional proximity of selected tokens in standard Markov tasks versus their discrete distribution in non-standard Markov tasks. This visualization aligns with the theoretical motivation behind DM and offers a clear, interpretable understanding of its behavior across varying task complexities.

Supplementary Material: I reviewed the feasibility of the code provided in the submitted supplementary materials.

Relation To Broader Scientific Literature: Previous work, whether DT or DC, had a certain degree of data bias. DM introduces a dynamic token selection mechanism that dynamically balances local and long-term dependencies.
This mechanism, inspired by MoE, leverages a router and hypernetwork to select tokens for attention computation. This innovation allows DM to adaptively focus on relevant features, addressing DT's limitations and improving performance on tasks with standard and non-standard Markov properties.

Essential References Not Discussed: No, all essential related works have been cited and discussed.

Other Strengths And Weaknesses:

Strengths: The paper creatively combines ideas from CSM, MoE, and conditional computation to design DM. The dynamic token selection mechanism, inspired by MoE, is a novel and innovative approach to balancing local and long-term dependencies in offline RL. By significantly reducing FLOPs and memory usage compared to DT, DM addresses a key practical challenge in scaling RL models, making it more feasible for resource-constrained environments.

Weaknesses: The paper does not explicitly discuss the limitations of DM, such as potential failure cases or scenarios where it may underperform compared to baseline methods. A more balanced discussion would strengthen the paper.

Other Comments Or Suggestions: It is recommended to adjust some instances of "MOE" to "MoE" in the main text to ensure consistency and accuracy in terminology.

Questions For Authors:

1. The existing loss functions mainly focus on the MSE loss for action prediction. Considering the stability of the training process, should additional regularization terms, such as a routing weight smoothing term, be introduced to prevent abrupt changes in token selection between adjacent tokens?
2. Is the dynamic range of k generated by the hypernetwork constrained? When the input sequence contains significant noise, can the hypernetwork generate extreme values (e.g., k=0 or k=S), causing the model to degrade into a purely convolutional or attention-based architecture?
3. Could the paper provide a more balanced discussion by highlighting DM's limitations, such as scenarios where it may underperform, to guide future work?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thanks for the careful review of our work. Due to the strict word limit, we have tried to address the reviewers' comments carefully. All additional experiments and suggestions will be incorporated into the updated text.

**It is recommended to adjust "MOE" to "MoE"**

We apologize for this mistake. To ensure consistency and accuracy of terminology, we will change all occurrences of "MOE" to "MoE" in the main text.

**Q1: Should regularization terms be introduced**

We appreciate the reviewer's thoughtful insights. To ensure our algorithm's simplicity and ease of deployment, we have solely used Equation (6) for the entire model training. Introducing additional regularization terms, such as a routing weight smoothing term, can improve training stability. To investigate this, we conducted experiments by adding a regularization term with a weight of 0.1 on top of Equation (6). The specific details of the regularization are as follows:

$$L_{smooth}=\lambda \sum_{i=1}^{S-1}\|w_i-w_{i+1}\|^2$$

The experimental results are presented in the table below. We refer to DM with the added regularization term as DM_s.

||DM|DM_s
-|-|-
halfcheetah-medium|43.5 $\pm$ 0.7|33.1 $\pm$ 0.2
hopper-medium|98.1 $\pm$ 3.6|90.9 $\pm$ 1.0
walker2d-medium|83.8 $\pm$ 0.8|80.1 $\pm$ 0.1
maze2d-umaze|86.9 $\pm$ 1.9|86.4 $\pm$ 0.8
antmaze-umaze|100.0 $\pm$ 0.5|48.0 $\pm$ 0.8

We found that while DM_s exhibited improved stability, it experienced varying degrees of performance degradation across all tasks compared to DM. We hypothesize that this is primarily due to the weight smoothing term constraining the flexibility of token selection based on task-specific characteristics, making the selection process overly conservative.
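The smoothing term $L_{smooth}$ above reduces to a simple sum of squared differences over consecutive routing weights. A minimal sketch, assuming scalar per-token router weights $w_i$ and the stated $\lambda = 0.1$; the weight values are made up:

```python
# Sketch of L_smooth = lambda * sum_{i=1}^{S-1} ||w_i - w_{i+1}||^2 for
# scalar routing weights. Weights below are illustrative only.

def smoothing_loss(weights, lam=0.1):
    return lam * sum((a - b) ** 2 for a, b in zip(weights, weights[1:]))

w = [0.9, 0.7, 0.7, 0.2]
loss = smoothing_loss(w)   # 0.1 * (0.2**2 + 0**2 + 0.5**2) = 0.029
```

The term penalizes exactly the abrupt neighbor-to-neighbor changes the reviewer asked about, which is also why it can over-smooth: large weight jumps between adjacent tokens are sometimes what trajectory stitching needs.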
Abrupt changes in token selection across neighboring positions can be reasonable, as they allow the model to flexibly concatenate trajectories based on data characteristics while preserving the Markov properties required for specific tasks. In future work, we will explore more refined approaches to achieving efficient token selection more smoothly.

**Q2: Is the dynamic range of k constrained**

The range of values for $k$ is unrestricted to ensure the algorithm's simplicity. Depending on the nature of the task, $k$ can approach either 0 or the entire sequence length $S$. The medium-replay dataset (high noise), medium dataset (moderate noise), and medium-expert dataset (low noise) from specific tasks serve as references for evaluating the robustness of our model's selection, as shown in Table 8 and Figure 6. For the hopper-medium task, the average number of selected tokens fluctuates slightly across datasets of different qualities. A similar pattern is observed in the other two tasks, with the overall average number of selected tokens remaining between 20 and 30, approximately half of the entire sequence length, without significant degradation.

To further investigate the hypernetwork's predictions in highly noisy environments, we introduced Gaussian noise sampled from a standard normal distribution to all tokens in the hopper-medium-replay and halfcheetah-medium-replay datasets while keeping the original action labels unchanged. We refer to these modified datasets as hopper-medium-noise and halfcheetah-medium-noise. The results show that the hypernetwork outputs a higher $k$ value for hopper-medium-noise, whereas the opposite trend is observed for halfcheetah-medium-noise. This suggests that the number of selected tokens adapts to task-specific characteristics in highly noisy environments. No significant performance degradation was observed during the experiments, indicating the robustness and stability of our approach.
||1st Mixer Layer|2nd Mixer Layer|3rd Mixer Layer|Average
-|-|-|-|-
hopper-medium-replay|40.59|33.60|15.76|29.98
hopper-medium-noise|45.94|37.66|29.83|37.81
halfcheetah-medium-replay|15.07|31.99|37.29|28.12
halfcheetah-medium-noise|10.04|25.37|7.99|14.47

**W1/Q3: A more balanced discussion would strengthen the paper**

We have summarized DM's advantages and limitations to help readers understand our approach better. By dynamically selecting and concatenating important tokens at each layer, DM reduces computational complexity while effectively balancing the trade-off between capturing long-term dependencies and extracting local Markov features. This approach enhances efficiency and offers valuable insights into scaling laws for offline RL. Notably, the dynamic token selection during inference relies on the auxiliary predictor for online decision-making, which may introduce latency in scenarios with strict real-time requirements. Additionally, DM performs slightly worse than value-based methods on low-quality or noisy data. Future work will explore data augmentation or more efficient and robust token selection strategies—such as adversarial training or noise-adaptive mechanisms—to improve adaptability to noisy or low-quality data.
Summary: The main contribution is a novel dynamic token selection mechanism termed Decision Mixer (DM), inspired by MoE, to enhance CSM for offline reinforcement learning. DM adaptively selects key tokens for attention computation while preserving information from unselected tokens via feature concatenation, improving efficiency and mitigating information loss. Additionally, an auxiliary predictor in the autoregressive sampling process enhances long-term decision-making. Experiments show that DM achieves SOTA performance with reduced computational cost.

Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence.

Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria, including the benchmark datasets, are well-aligned with the problem and application at hand.

Theoretical Claims: The factorization in Equation (2) correctly follows from the conditional probability decomposition, ensuring a proper reweighting mechanism based on future returns. The formulation aligns with standard importance sampling principles, making it a valid approach. The theoretical justification for this reweighting perspective is well-grounded.

Experimental Designs Or Analyses: The study presents a well-structured evaluation of DM across multiple D4RL domains, supported by detailed ablation studies that effectively highlight the contributions of its core components. One notable strength is the computational complexity analysis (Table 3), demonstrating DM's efficiency in reducing memory usage and FLOPs compared to baseline methods like DT and DC. This analysis underscores DM's practical advantages and aligns with the broader goal of developing scalable and resource-efficient offline RL methods; exploring how DM's efficiency scales with larger datasets or more complex environments would provide further insights into its applicability in real-world settings.
Supplementary Material: Yes, I primarily reviewed the DM architecture design in the code within the supplementary material.

Relation To Broader Scientific Literature: Scaling laws in offline reinforcement learning have not been explored well, especially regarding how CSM methods based on the Transformer architecture can maximize the performance advantages of Transformers, which is a fascinating question. While some previous works [1,2] have scaled up the architecture and parameters of DT, they have done so at the cost of a proportional increase in computational resources. In contrast, DM proposes a feasible solution for computational efficiency while maintaining performance.

[1] Lee K H, Nachum O, Yang M S, et al. Multi-game decision transformers. Advances in Neural Information Processing Systems, 2022.
[2] Reed S, Zolna K, Parisotto E, et al. A generalist agent. arXiv preprint arXiv:2205.06175.

Essential References Not Discussed: All essential related works that provide the necessary context for understanding the key contributions of the paper have been cited and discussed.

Other Strengths And Weaknesses:

Strengths:
+ The paper tackles underexplored challenges in offline RL, such as handling non-standard Markov properties and improving generalization in suboptimal trajectories. These contributions are original and fill important gaps in the literature. DM's ability to handle standard and non-standard Markov tasks applies broadly to various RL benchmarks, including Gym, Adroit, Kitchen, AntMaze, and Maze2D.
+ The paper is well-organized, with clear explanations of the motivation, methodology, and results. The use of visualizations enhances understanding of the model's behavior.

Weaknesses:
- While the dynamic token selection mechanism can adaptively adjust computational load, it may introduce selection bias in certain long-sequence or high-noise tasks (e.g., complex maze tasks with sparse rewards) due to fluctuations in router weights caused by local noise.
Therefore, the robustness of the dynamic mechanism in extremely sparse scenarios still needs to be enhanced.

Other Comments Or Suggestions: The authors could consider providing a more detailed explanation of this aspect, as the autoregressive nature of sampling may cause the input distribution to gradually deviate from the training data. Since the auxiliary predictor relies on token selection labels generated during training, could this distribution shift potentially affect the quality of the generated sequences?

Questions For Authors:
1. The training of the auxiliary predictor relies on token selection labels generated during training. However, the model generates sequences autoregressively during actual sampling, which may cause the input distribution to deviate slightly from the training data. In this case, could the auxiliary predictor make incorrect selections due to distribution shift, thereby affecting the quality of the generated sequences?
2. How does the "dynamic token selection mechanism" mentioned in the paper reduce computational overhead? Specifically, how does it lower computational complexity by reducing the number of tokens entering the attention layer?

Code Of Conduct: Affirmed.

Overall Recommendation: 5
Rebuttal 1:

Rebuttal: Thanks for the careful review of our work. Due to the strict word limit, we have tried to address the reviewers' comments carefully. These additional experiments and suggestions will be incorporated into the updated main text.

**W1: The robustness of the dynamic mechanism in sparse scenarios needs to be enhanced**

We fully agree that there is still potential for improvement. An intuitive approach is to adopt data augmentation. Considering that complex tasks exhibit significant noise variations mainly at the environmental state level, we introduce Gaussian noise perturbation to all state dimensions in the training data, formulated as:

$$\tilde{s}_j = s_j + \alpha \cdot \sigma_j \cdot \epsilon_j.$$

Here, $\alpha$ is a controllable noise intensity, which we set to 0.05. $\sigma_j$ represents the standard deviation of the $j$-th dimension, which has been computed during data preprocessing. $\epsilon_j$ is the $j$-th component of a noise vector $\epsilon \sim \mathcal{N}(0, I)$ sampled from the standard normal distribution. We generate two random seeds to add noise to the initial data $\tau$, obtaining two noisy datasets, $\hat{\tau}_1$ and $\hat{\tau}_2$. Each training iteration uses a total of $3 \times$ batch data. $\tau$, $\hat{\tau}_1$, and $\hat{\tau}_2$ share identical values except for the perturbed states. The processed data is then used for training and referred to as DM_enhanced. The experimental results on three tasks are as follows:

| | DM | DM_enhanced |
|-|-|-|
| hopper-medium | 98.1 $\pm$ 3.6 | 98.0 $\pm$ 1.9 |
| maze2d-umaze | 86.9 $\pm$ 1.9 | 88.9 $\pm$ 1.3 |
| antmaze-umaze | 100.0 $\pm$ 0.5 | 100.0 $\pm$ 0.1 |

DM_enhanced performs better than DM on the maze2d-umaze task, with a smaller standard deviation. Although DM_enhanced does not perform better on the reward-dense task hopper-medium, a similar stability improvement is observed. This phenomenon suggests that the data augmentation employed in DM_enhanced enhances its stability and robustness.
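A minimal sketch of this augmentation step (function and variable names are illustrative, not the authors' implementation; the dummy data stands in for D4RL state trajectories):

```python
import numpy as np

def perturb_states(states, alpha=0.05, rng=None):
    """Add per-dimension Gaussian noise to offline-RL states:
    s_tilde[j] = s[j] + alpha * sigma[j] * eps[j], where sigma[j] is the
    standard deviation of dimension j over the dataset and eps ~ N(0, I)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = states.std(axis=0)               # computed once during preprocessing
    eps = rng.standard_normal(states.shape)  # standard normal noise vector
    return states + alpha * sigma * eps

# Two noisy copies of the dataset, each from its own seed, trained
# alongside the original data (3x batch per iteration, as in the rebuttal).
states = np.random.default_rng(0).normal(size=(1000, 11))  # dummy state data
tau_hat_1 = perturb_states(states, rng=np.random.default_rng(1))
tau_hat_2 = perturb_states(states, rng=np.random.default_rng(2))
```

With $\alpha = 0.05$ the perturbation is small relative to each dimension's spread, so the augmented copies share the original data's structure while varying at the state level.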
Due to rebuttal time constraints, exploring more instructive designs for enhancing robustness will be part of our future work.

**Q1: Could the auxiliary predictor make incorrect selections due to distribution shift**

The only potential distribution shift is the difference between the training and inference inputs. We designed three schemes: (1) The hypernetwork uses a single token as input for training and inference, without needing an auxiliary predictor. (2) The hypernetwork uses a causal sequence as input for training and inference, without needing an auxiliary predictor. (3) The hypernetwork uses the entire sequence as input for training, providing data for the binary classification training of the auxiliary predictor, which is then used for inference (the approach adopted by DM).

| | (1) | (2) | (3) |
|-|-|-|-|
| halfcheetah-medium | 27.5 $\pm$ 5.1 | 40.9 $\pm$ 0.6 | 43.5 $\pm$ 0.7 |
| hopper-medium | 82.7 $\pm$ 18.0 | 94.7 $\pm$ 5.0 | 98.1 $\pm$ 3.6 |
| walker2d-medium | 55.2 $\pm$ 4.9 | 83.4 $\pm$ 2.7 | 83.8 $\pm$ 0.8 |
| maze2d-umaze | 59.1 $\pm$ 4.7 | 83.2 $\pm$ 5.8 | 86.9 $\pm$ 1.9 |
| antmaze-umaze | 80.3 $\pm$ 12.4 | 75.0 $\pm$ 9.1 | 100.0 $\pm$ 0.5 |
| average score | 61.0 | 75.4 | 82.5 |

Scheme (3) demonstrated the best performance and stability across all tasks, with significantly shorter training times. Although schemes (1) and (2) ensured consistency between training and inference, they struggled with convergence due to insufficient utilization of global information from the training data. Scheme (3) adopts a task decomposition approach to implement training and inference hierarchically, with the auxiliary predictor trained based on the prediction data from each round of the hypernetwork and router. Figure 1 of the anonymous pdf shows the training loss curves, which indirectly suggest that binary classification of data on a specific Mixer layer for a given task is relatively easy to learn.
Experimental results in Table 1 confirm that potential distribution shifts were addressed through synchronized training.

**Q2: How does the "dynamic token selection mechanism" reduce computational overhead**

In a specific Mixer Layer $l$, the dynamic token selection mechanism reduces computational complexity by adaptively selecting the top-$k$ tokens (via router $R^l$ and hypernetwork $H^l$) for attention processing while skipping others. When the sequence length input to the attention layer is reduced from $S$ to $k$, the attention complexity decreases from $O(S^2d)$ to $O(k^2d)$, and the computational complexity of the FFN layer reduces from $O(Sd)$ to $O(kd)$, where $d$ is the hidden dimension. For example, if $k = S/2$, the quadratic attention-score computation is reduced to 25% of the original FLOPs, while the linear projection and FFN terms are halved. Experiments show that DM reduces FLOPs by 47.0% compared to DT and achieves better wall-clock time performance, as shown in the table below.

| | DT | DC | DM |
|-|-|-|-|
| halfcheetah-medium | $\approx$ 12.0h | $\approx$ 8.0h | $\approx$ 7.5h |
| hopper-medium | $\approx$ 10.0h | $\approx$ 6.5h | $\approx$ 7.5h |
| antmaze-umaze | $\approx$ 10.0h | $\approx$ 6.0h | $\approx$ 6.5h |
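A toy sketch of the selection step and its cost accounting (a simplified cost model with constant factors dropped; the function names and the synthetic router scores are illustrative, not the paper's code):

```python
import numpy as np

def layer_cost(seq_len, d):
    """Simplified per-layer cost: a quadratic attention-score term plus a
    linear FFN term, mirroring the O(S^2 d) + O(S d) accounting above."""
    return seq_len ** 2 * d + seq_len * d

def select_top_k(router_scores, k):
    """Indices of the k highest-scoring tokens, kept in temporal order.
    The remaining tokens bypass attention (and in DM are merged back
    later via feature concatenation)."""
    order = np.argsort(router_scores)[::-1]
    return np.sort(order[:k]), np.sort(order[k:])

S, d = 64, 128
# synthetic monotone scores: the top half of the sequence is selected
keep, skip = select_top_k(np.arange(S, dtype=float), k=S // 2)
full_cost = layer_cost(S, d)
pruned_cost = layer_cost(S // 2, d)
# quadratic term alone: (S/2)^2 / S^2 = 25% of the original score FLOPs
```

Sorting the kept indices preserves the tokens' temporal order, which matters for causal attention over the pruned sequence.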
Learning Likelihood-Free Reference Priors
Accept (poster)
Summary: This paper proposes the learning of reference priors via simulation-based approaches and normalizing flows. I am very short on time for ICML reviews. Apologies for my reviews being a bit short.

Claims And Evidence: They claim that they can learn reference priors via simulations. Indeed, we do see evidence that this can be done in a few simple and two more complex examples. It is a bit confusing to me that the approach is not able to learn the reference prior for two of the simple models well enough that the KS test already detects differences for small sample sizes. Did you try to learn reference priors on an unconstrained scale (e.g., log SD)?

Methods And Evaluation Criteria: They seem appropriate.

Theoretical Claims: Only basic theory of reference priors is used, which, to my understanding, is correctly interpreted and applied.

Experimental Designs Or Analyses: Experiments are reasonable but only very low dimensional. What would happen if the normalizing flow would attempt to learn a flat reference prior over the reals, i.e. for the mean of a Gaussian model? Would that work reasonably at all?

Supplementary Material: None present I think.

Relation To Broader Scientific Literature: This paper fits well into the literature on reference priors.

Essential References Not Discussed: Outside of reference priors, other approaches exist that use simulations to learn priors, partly even with normalizing flows. E.g., see https://arxiv.org/abs/2411.15826 , https://arxiv.org/abs/2308.11672 , https://arxiv.org/abs/2410.08710 — perhaps these lines of literature should be cited too?

Other Strengths And Weaknesses: see questions

Other Comments Or Suggestions: nothing major

Questions For Authors: One issue I have is that, since reference priors are typically very wide, they are likely to create trouble when applied in an SBI setting to estimate the posterior.
This is because, for parameters in the tails of a reference prior, simulated data is often so unreasonable that learning the posterior in the space of realistic data becomes quite hard. Can the authors comment on how applicable they see their reference priors (or reference priors more generally) in actual SBI settings beyond the few examples they provided in the paper?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We thank the reviewer for their useful comments. We have included a set of new experiments, described at the end of this review.

# Reviewer Comments

1. On the KS test performance: The KS tests for the Exponential and Gaussian models do reveal discrepancies. Both reference priors have asymptotes at $\theta=0$. Our failure to fully capture the shape of the reference prior results from having to constrain the output of the variational prior to a subset of the real line. Currently, the final layer of the normalising flows we use is a sigmoid layer, and it must therefore be the case that the flow's density $\to 0$ at the boundaries of the region (since it cannot assign mass to infinitely low values in logit space), meaning the current network design cannot capture an asymptote accurately. This can make passing a KS test hard. Yet, aside from the asymptote, the learned and true priors are similar over the majority of the support (see updated Fig. 2 in pdf below). Note that this problem would persist even when modelling $\log(\sigma)$: either we'd leave the space unconstrained, in which case we'd fail to fully capture the uniform, improper prior on the unconstrained space, or we'd constrain to a bounded domain through a sigmoid layer again, leading to the same problem as above. We will discuss this in the revision, along with the use of alternative variational families that can better capture asymptotes.

2. On learning priors in unconstrained spaces and learning flat priors over the reals: Please see the previous bullet point, which we hope addresses this.

3. On experiments being low dimensional: To demonstrate our methods' abilities to perform well for higher-dimensional parameters, we include in the pdf below (Fig. 1) new experiments on the SLCP model (description below, and further discussion in our reply to Reviewer V6ZV), which is higher-dimensional.
Both methods recover the expected priors; GED is less successful since it performs posterior estimation, and SLCP often has complex posteriors. We'll discuss this in the revision.

4. On the additional references: Thank you for these suggestions. These are relevant to the problem of determining prior distributions but focus on eliciting expert priors (arguably the opposite of our problem).

5. On applications & practical implications of reference priors in SBI: It is possible for certain regions of parameter spaces to produce unreasonable/nonsensical outputs from simulators, and that reference priors might assign significant mass to such regions. However, the modeller can still restrict attention to regions of the parameter space $\Theta$ that do not produce absurd behaviours even when using our methods; we simply address the problem of how to learn a reference prior on a parameter space once it is given (which may be a subset of $\Theta$). We provide examples in the new experiments (Figures 2 and 3 of the accompanying pdf) on doing SBI using the learned reference priors (and posteriors), and we can further point to our answer to Question 2 for reviewer RgwU, where we provide additional discussion on the use of reference priors for SBI.

# New experiments

We have included a set of new experiments here, https://github.com/ICML-7582/rebuttal_plots/blob/main/plots.pdf, including:

- Fig. 1: Experiments on a new simulator (SLCP-D [1]) that possesses a higher-dimensional parameter space. The SLCP-D model has five parameters which parameterise the mean $\mu(\theta)$ and covariance $\Sigma(\theta)$ of a 2D Gaussian. The output of SLCP-D consists of four iid samples from said Gaussian and 48 distraction vectors sampled from a mixture of Student's $t$-distributions that are independent of the parameters. SLCP-D has a relatively **s**imple **l**ikelihood that produces **c**omplex **p**osteriors.
The true reference prior should be uniform in $\mu(\theta)$, and its density should decay rapidly in the determinant $|\Sigma(\theta)|$. VLB & GED recover this behaviour. This also highlights that our methods are approximately invariant to reparameterisation: we learn reference priors for $\theta$ but recover the correct behaviour of $\mu(\theta)$ and $|\Sigma(\theta)|$. VLB outperforms GED for SLCP-D; intuitively this is because GED approximates a (complex) posterior during training, while VLB methods do not.

- Figs. 2 & 3: SBI experiments using reference priors from InfoNCE (a VLB method) & GED. Fig. 2: NRE posterior vs. ratio estimator trained during the VLB reference prior training; Fig. 3: NPE posterior vs. posterior density estimator trained during GED training. Inferences are almost identical, demonstrating our point about being able to perform SBI with no additional cost.
- Fig. 4: New plots for SIR model outputs when only the infection data is used to find the reference prior.
- Fig. 5: An updated version of Figure 2 in the paper.
- Fig. 6: An updated plot of the KS test.

[1] Lueckmann et al., "Benchmarking Simulation-Based Inference", AISTATS 2021.
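The boundary effect described in point 1 of the rebuttal (a sigmoid output layer forcing the flow's density to vanish at the edges of a truncated support) can be checked numerically. This is an illustrative sketch assuming a standard-normal base distribution and a support of $[0, 100]$, not the authors' flow architecture:

```python
import math

def sigmoid_flow_density(theta, hi=100.0):
    """Density of theta = hi * sigmoid(z) with z ~ N(0, 1), via the
    change-of-variables formula p(theta) = p_z(z) / |d theta / d z|."""
    u = theta / hi
    z = math.log(u / (1.0 - u))                       # logit of theta/hi
    base = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    jacobian = hi * u * (1.0 - u)                     # d theta / d z
    return base / jacobian

# The density collapses near the boundary, so an asymptote at theta = 0
# (as in the Gaussian/Exponential reference priors) cannot be matched.
near_zero = sigmoid_flow_density(1e-3)
interior = sigmoid_flow_density(50.0)
```

Here `near_zero` is many orders of magnitude below `interior`, matching the rebuttal's observation that such a flow cannot place unbounded mass at the edge of its support.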
Summary: This paper focuses on learning an objective prior in the context of likelihood-free inference (SBI), where the likelihood function is intractable. Within the mutual information (MI) estimation framework, the authors propose three methods: GED, InfoNCE, and SMILE. These methods are systematically compared through various toy examples in simulation studies and further applied to more complex models, including the g-and-k model and an SIR model.

Claims And Evidence: This work is novel, as it may be the first attempt to obtain reference priors in the context of simulation-based inference. The paper reframes the problem of finding reference priors as a mutual information (MI) estimation task and explores solutions using various machine learning techniques. The presented simulation studies are particularly interesting, making this article an enjoyable read.

Methods And Evaluation Criteria:

1. Inconsistency among the three methods

From a practical perspective, it is unclear which of the three methods should be preferred. According to the experiments in Section 4, the GED method appears to be the most favorable. However, in Section 4.2.1, GED fails to produce a flat prior for the first parameter, $a$, whereas the other methods do. Additionally, for the parameter $g$, the support varies significantly among the methods, leading to minimal overlap in the joint distributions of GED and InfoNCE (Figure 4(d)). This discrepancy could result in entirely different Bayesian posterior inferences. Given that the paper aims to establish an objective prior, as the authors illustrate in the Introduction, these inconsistencies undermine the article's central goal.

2. Methodological questions on GED

In Equation (12), obtaining the MI estimator requires estimating the posterior distribution \( \pi_{x_{1:n}} \), but this step is not straightforward.
I believe the accuracy of posterior estimation plays a crucial role in determining the performance of the optimization in Equation (12). Even with a parameterized prior \( \pi_\phi \), obtaining the exact posterior distribution remains challenging. Modern ML-based SBI methods \cite{papamakarios2016fast, greenberg2019automatic, lueckmann2021benchmarking} and references therein rarely achieve exact posterior estimates in many cases. Specifically, in lines 196–198, the authors state: "we propose defining \( \pi \) and \( \hat{\pi}_{x_{1:n}} \) to be of the same parameterized model class, and use \( \hat{I}^{\phi, \pi}(\mathcal{D}_\phi) \) (12) as an estimate of MI." However, this explanation remains somewhat unclear. I believe this aspect should be elaborated further in the main text to clarify the underlying methodology.

3. Simulation studies

The overall structure of the simulation studies could be improved for better clarity and organization. Many challenges in SBI arise from high-dimensional parameter spaces, as seen in the models described in Section 4.2.1. Since multi-parameter models are a common concern among practitioners, extending the analysis beyond Section 4.1.1 to include such cases is strongly recommended, even if only proper priors are considered. In these cases, using classification two-sample tests (C2ST) or sliced Wasserstein distance, instead of KS statistics, would enhance consistency with the SBI literature. This would provide a more systematic comparison among the methods.

4. Minor comments and questions

Additional derivations for clarity: The article states that Equation (2) implies (3) and that \( I_\pi(x_{1:n},\theta) \) is equivalent to Equations (9) and (10). Providing explicit derivations for these statements would improve clarity for the reader.

Comparison with standard SBI methods (Section 3.3): This section demonstrates the ability to perform SBI at no additional cost using the learned reference prior.
However, how does the posterior distribution obtained through this approach compare to those from standard SBI methods?

Theoretical Claims: There are not many theoretical results in this paper.

Experimental Designs Or Analyses:

1. Choice of Test Metrics: The authors used the Kolmogorov–Smirnov two-sample test (KS test) for quantitative comparisons between the learned priors and the ground-truth reference priors. The KS test is less common and may not be ideal for assessing differences in higher-dimensional distributions or distributions with pronounced tails or boundary behaviors. Adopt metrics more established in the SBI literature, such as classification two-sample tests (C2ST) or sliced Wasserstein distances, which can better handle high-dimensional distributions and provide clearer insights into distributional differences relevant to SBI.

2. Dimension and Complexity of Simulation Studies: The paper primarily evaluates methods on relatively simple, low-dimensional examples. While the authors briefly consider multi-parameter models (e.g., g-and-k and SIR models), the depth of analysis for these higher-dimensional cases is somewhat limited. Given practitioners' frequent need to handle higher-dimensional parameter spaces, it would greatly strengthen the paper to explicitly include additional high-dimensional simulation studies. Even if restricted to proper priors, this would better highlight practical applicability.

3. Stability and Consistency of Results: There are notable inconsistencies among the GED, InfoNCE, and SMILE methods across different experiments (e.g., Section 4.2.1). While the authors briefly acknowledge this, the practical implications for choosing one method over others are not adequately explored. Provide explicit guidance or decision rules (possibly via additional experiments) on selecting among methods based on specific contexts or performance indicators.
4. Comparison with Standard SBI Methods: The paper highlights that the proposed approach facilitates SBI at no extra cost but does not quantitatively compare these posterior distributions to those obtained by existing standard SBI methods. Incorporate explicit quantitative comparisons of posterior inference accuracy or consistency with established SBI benchmarks to demonstrate added value or trade-offs.

Supplementary Material: Yes

Relation To Broader Scientific Literature: This paper's key contributions lie at the intersection of simulation-based inference (SBI), objective Bayesian methods, and machine learning-based mutual information estimation.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions: N/A

Questions For Authors: See above

Code Of Conduct: Affirmed.

Overall Recommendation: 2
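The sliced Wasserstein distance the reviewer recommends takes only a few lines; a minimal Monte-Carlo sketch (assuming equal-sized sample sets; the function name is illustrative):

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=200, seed=0):
    """Monte-Carlo sliced 1-Wasserstein distance between two sample sets
    of shape (n, d): average the 1-D Wasserstein distance of the samples
    projected onto random unit directions. For equal-sized sets, the 1-D
    distance is the mean gap between the sorted projections."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        v = rng.normal(size=x.shape[1])
        v /= np.linalg.norm(v)                       # random unit direction
        total += np.mean(np.abs(np.sort(x @ v) - np.sort(y @ v)))
    return total / n_proj

rng = np.random.default_rng(1)
a = rng.normal(size=(500, 3))
b = rng.normal(size=(500, 3)) + 2.0   # shifted distribution
```

Distances near zero indicate matching distributions; for a C2ST one would instead train a classifier on pooled, labelled samples and compare its held-out accuracy to 0.5.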
Rebuttal 1:

Rebuttal: Thank you for your helpful feedback. We have included a set of new experiments [here](https://github.com/ICML-7582/rebuttal_plots/blob/main/plots.pdf), and see reply to Reviewer UJGD for a detailed explanation.

## Methods and Evaluation Criteria

- On inconsistencies between methods: It is reasonable that there may be inconsistencies between methods since there may be multiple local optima that give approximately the same mutual information between the model parameters and output but correspond to different priors. We will use some of the extra space to discuss that a sensible approach in practice may be an "ensembling" approach, namely to perform multiple runs of the prior learning procedure to account for any such variation. Such ensembling approaches are common in SBI to account for variations of this kind, see e.g. [1, 2] below.
- On choosing between methods: Please see our discussion on this question in our response to Reviewer RgwU ('Question 3'). We will add this discussion to the final version.
- On the performance of GED given that it learns a posterior as well as a prior: Indeed, estimating the posterior is not straightforward for some simulators. This is a possible disadvantage of GED. However, the method has produced reference priors in almost all cases matching either the ground truth or the other approaches. Other approaches might be preferable depending on problem specifications. We believe our methods (the first to address approximating reference priors in likelihood-free settings) constitute a useful baseline upon which future work can likely improve.
- On lines 196--198: We'll use some of the extra space to give further details on the underlying methodology: we'll move the algorithm from Appendix A.6 into the main body, and specify that taking the prior and posterior estimators to be of the same model class is not necessary (the main reason for this was to reduce tuning and method complexity).
- On higher-dimensional parameter spaces: Please see discussion on SLCP below.
- On metrics: We have now used these metrics and present them in the tables below (format: _mean (standard dev)_ from 5 repeats). We'll use some of the extra space to discuss in the main body that we perform generally better but often similarly to/a bit worse than a likelihood-based method ("Berger") from [3].

Wasserstein:

| **Task** | **Berger** | **InfoNCE** | **SMILE** | **GED** |
| -------- | -------- | -------- | -------- | -------- |
| **Gaussian** | 6.95 (0.64) | 7.08 (0.56) | 6.77 (0.53) | 1.66 (0.20) |
| **Exponential** | 4.25 (0.39) | 2.43 (0.61) | 3.83 (0.75) | 2.04 (0.32) |
| **AR(1)** | 0.14 (0.01) | 0.06 (0.02) | 0.05 (0.03) | 0.06 (0.02) |
| **Triangular** | 0.08 (0.01) | 0.04 (0.00) | 0.05 (0.01) | 0.02 (0.01) |

C2ST:

| **Task** | **Berger** | **InfoNCE** | **SMILE** | **GED** |
| -------- | -------- | -------- | -------- | -------- |
| **Gaussian** | 0.90 (0.01) | 0.62 (0.00) | 0.63 (0.01) | 0.49 (0.01) |
| **Exponential** | 0.91 (0.01) | 0.58 (0.04) | 0.61 (0.02) | 0.59 (0.06) |
| **AR(1)** | 0.62 (0.01) | 0.50 (0.02) | 0.50 (0.01) | 0.51 (0.01) |
| **Triangular** | 0.64 (0.01) | 0.54 (0.02) | 0.55 (0.02) | 0.51 (0.02) |

- On Equations 2, 3, 9, and 10: Thank you; we'll derive these relationships in the appendix for clarity.
- On performing SBI: We include a demonstration of how SBI can be performed at no additional cost using our methods, and a comparison against the results of running NPE and NRE in Figs. 2 & 3 of the rebuttal pdf. We'll use the extra space to include these results in the revised paper.

## Experimental Designs or Analyses

1. Please see comment above on the new test metrics. For further comments and discussion on the use of the KS test, please see reply 1. to Reviewer UJGD below.
2. We have included new experimental results for models with a higher number of parameters (SLCP, for description see reply to UJGD).
The true reference prior for the parameters should be uniform in $\mu(\theta)$, whilst the prior density for the determinant $|\Sigma(\theta)|$ should decay rapidly in $|\Sigma(\theta)|$. The priors learned by VLB and GED recover this behaviour. This highlights that our methods are approximately invariant to reparameterisation, since we learn reference priors in terms of $\theta$ but recover the correct reference prior in terms of $\mu(\theta)$ & $\Sigma(\theta)$. VLB methods outperform GED; GED approximates a (complex) posterior during training, while VLB methods don't. More generally, the tradeoffs between VLB and GED are analogous to those between NRE and NPE.

3. Please see our responses above on how to decide between/combine methods.

[1] Cannon et al., "Investigating the impact of model misspecification in neural simulation-based inference", arXiv preprint (2022)
[2] Elsemüller et al., "Sensitivity-aware amortized Bayesian inference", TMLR (2024)
[3] Berger et al., "The Formal Definition of Reference Priors", The Annals of Statistics (2009)

---

Rebuttal Comment 1.1:

Comment: Thank you for the response. I appreciate the clarification, but I believe the concern remains only partially addressed, and in some cases, the method does not appear to perform reliably, as detailed below.

If multiple local optima lead to priors with substantially different supports—as seen in the g-and-k example—then describing the result as "objective" becomes less convincing. As noted in the Introduction (right column, lines 39–44), the aim is to minimize the modeller's prior influence and derive posteriors driven primarily by the likelihood. However, if the learned prior is highly sensitive to the optimization method or the method itself, this objective appears difficult to achieve in practice. While an ensemble approach may offer a practical solution, it doesn't fully resolve the conceptual issue.
A more formal justification—either theoretical or empirical—would strengthen the argument for objectivity in this framework.

I appreciate the response regarding how to choose between methods. However, this remains unclear in practice even given the response on this concern. For instance, in the case of the g-and-k distribution, which method would be preferred? Similarly, what is the recommended approach for the SIR model? Without clear guidance or criteria, it's difficult to know how to choose between methods ahead of time.

A key concern with InfoNCE and SMILE is their empirical performance. As shown in the simulation studies using C2ST, GED closely approximates the exact reference prior, while InfoNCE and SMILE fall short—even in simple, one-dimensional settings. If these methods struggle in such basic scenarios, it raises concerns about their robustness in higher-dimensional problems. The paper would be strengthened by including C2ST results for the SLCP case (if possible), and by extending the evaluation to higher-dimensional examples. One straightforward approach could be to increase the dimensionality of the toy models in Section 4.1.1, where the true prior is still accessible. As another reviewer also noted, addressing performance in high-dimensional settings would significantly enhance the clarity and impact of the empirical results.

A key concern with GED is its heavy reliance on the quality of the posterior approximation, as the authors have acknowledged. In toy examples, obtaining accurate posteriors is relatively feasible using techniques from simulation-based inference. However, in more complex settings, accurate posterior estimates are often difficult to obtain. This raises concerns not only about the objectivity of the resulting prior, but also about the method's practical applicability—especially evident in the g-and-k distribution case.
This sensitivity might also explain why InfoNCE outperforms GED on the SLCP task in the updated experiments, despite the opposite trend in the toy examples. As noted in \cite{lueckmann2021benchmarking}, SLCP tends to perform poorly when only $10^4$ simulations are used for the learning, which could be a contributing factor.

While I recognize the potential novelty of the idea presented in the manuscript, I believe that significant revisions are necessary to bring it in line with the standards expected for publication. As such, I feel it is appropriate to maintain my original assessment score.

---

Reply to Comment 1.1.1:

Comment: Thank you for the detailed comment, we reply to each paragraph in your reply below.

### Para. 2

The reviewer is concerned that the lack of a unique way to minimise the influence of the prior on the posterior means that approaches to doing so are not "objective". This is a problem the reviewer has with the name "objective", which is not our own terminology. The term "objective priors" has been historically adopted for methods giving rise to "minimally informative" priors, according to different definitions of "minimally informative". We use this term to align with existing literature, but will discuss in our revision why "objective" is a problematic term (see, e.g., [4]).

The reviewer is also concerned that, even under the formal notion of "objectivity" in Eq. 2, our approaches are not "objective" because they may result in different priors. Again, this is a problem the reviewer has with pre-existing terminology. The specific notion of "minimally informative" we consider is the mutual information (MI) between $x$ & $\theta$. This is concave in the prior $\pi \in \Pi$ for fixed conditional $p_{\theta}$ (i.e., fixed simulator). Thus a unique global maximiser in $\Pi$ is not guaranteed to exist, and multiple ways to be "minimally informative" can exist under our operational definition of objectivity. Finding different ways to solve Eq. 2 is therefore useful for conducting a complete prior sensitivity analysis, and the fact that such solutions can substantially differ from each other isn't problematic in the way the reviewer states. As shown above and below, different metrics can be computed to check prior quality. Our revision will discuss all of this, and replace $=$ with $\in$ in Eq. 2.

### Para. 3

We already give guidelines for choosing between methods (mirroring those for NPE vs NRE) in our rebuttal that we'll integrate into the revision. We'll also discuss VLB methods' stronger theoretical guarantees, which may lead practitioners to favor them over GED (e.g., InfoNCE provides a valid lower bound on MI, yielding a natural measure of "objectivity").

Even with guidelines, expecting to know ahead of time which method works best for a specific problem is unrealistic. As in any ML task, the optimal method depends on the task's details, which are often impossible to specify exactly. This is recognised in [5] (penultimate paragraph of Sec. 4), which the reviewer cites. Instead, one can test each method and compare the quality of each prior. This can be done by, e.g., using a C2ST: if a learned prior is a good reference prior (RP), it should induce high MI between $x$ & $\theta$, making it easy to classify $(x,\theta)$ pairs drawn jointly vs. pairs drawn from the product of the marginals. Higher accuracies are therefore better in this classification task. The table below gives classification accuracies for different priors for SLCP-D (_mean_ (_std_) from 5 repeats); VLB & GED methods produce RPs with high classification accuracies, and both are better RPs than a uniform prior (a typical prior for SLCP-D experiments in the SBI literature).

| **Uniform** | **InfoNCE** | **SMILE** | **GED** |
| -------- | -------- | -------- | -------- |
| 0.55 (0.01) | 0.99 (0.00) | 0.98 (0.00) | 0.69 (0.10) |

### Para. 4

Our methods -- the first likelihood-free (LF) approaches for learning RPs for arbitrary simulation models -- consistently outperform a gold-standard likelihood-based approach from [3]. It is therefore difficult to see how they can be fairly described as "falling short". The order of complexity of tasks is: likelihood-based approaches to estimating RPs < LF approaches to learning RPs < LF learning of high-dimensional RPs. Our methods perform well relative to the baseline from [3] across a range of tasks in that second step, which has _no prior literature_. Thus our methods already offer a substantial improvement over the current state of the art. Further, we show above via C2STs that our learned RPs for SLCP are good, indicating high MI between $x$ and $\theta$ (the purpose of RPs) and evidencing our methods' efficacy in high-dimensional settings. GED does worse comparatively, consistent with our discussion on whether to use VLB or GED for models with complex posteriors.

### Para. 5

As demonstrated, GED has learned a good RP for SLCP _despite_ the known difficulties of performing posterior estimation for this model. The fact that it relies on a posterior approximation may therefore not be as limiting as it first seems. In any case, we have already discussed that VLB methods may be preferable in cases where posteriors are difficult to estimate, and we consider this insight (accompanied by the experimental results presented) a valuable contribution of our work.

### Refs

[4] Irony et al., "Non-informative priors do not exist", Journal of Statistical Planning & Inference 1997
[5] Lueckmann et al., "Benchmarking Simulation-Based Inference", AISTATS 2021
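The joint-versus-marginal classification check described under Para. 3 above can be sketched as follows (a toy illustration in which a hand-crafted decision rule stands in for a trained classifier; all names and the linear-Gaussian "simulator" are illustrative):

```python
import numpy as np

def joint_vs_marginal_sets(theta, x, rng):
    """Build the two labelled sample sets for the classifier test:
    (theta, x) pairs drawn jointly, and pairs with x permuted to break
    the dependence (i.e. drawn from the product of the marginals)."""
    joint = np.concatenate([theta, x], axis=1)
    marginal = np.concatenate([theta, x[rng.permutation(len(x))]], axis=1)
    return joint, marginal

rng = np.random.default_rng(0)
theta = rng.normal(size=(1000, 1))                     # draws from the prior
x = theta + 0.1 * rng.normal(size=(1000, 1))           # informative simulator output
joint, marginal = joint_vs_marginal_sets(theta, x, rng)

# Stand-in classifier: call a pair "joint" when theta and x are close.
# High accuracy indicates high mutual information between theta and x.
thresh = 0.3
accuracy = 0.5 * (np.mean(np.abs(joint[:, 0] - joint[:, 1]) < thresh)
                  + np.mean(np.abs(marginal[:, 0] - marginal[:, 1]) >= thresh))
```

In practice the hand-crafted rule would be replaced by any off-the-shelf classifier trained on the pooled, labelled sets; accuracy well above 0.5 signals strong dependence between $\theta$ and $x$, i.e. a prior that induces high MI.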
Summary: The paper proposes a way to approximate a reference prior for a Bayesian analysis from a flexible family of priors in the SBI (simulation-based inference) context, where the likelihood is intractable. The primary contribution here is the SBI context, in which various estimators of entropy are required to be specified. The paper uses methods adapted from those in the literature. The simulation results show that, in principle, this approach has some credibility. Claims And Evidence: Overall the principle of the method seems credible. However, there are a number of rough edges on the results, described below. Comments on Experiments (Section 4). It's good to see the initial focus on the known univariate reference prior examples. However, the claim of "accurate approximations of the ground truth reference priors" is slightly generous. The overall shape is there. However, in Fig 2: * Panels (a,b): there is clearly some density estimated to exist at theta=100, which definitely should not be there. That is, there seems to be some kind of undesirable "edge effect". * Panels (a,b): why is theta limited to 100? What happens if we consider theta larger than this? Does the method require the parameter space to be compact? How does the user know where to truncate it, if it needs to be truncated? * Panels (c,d): there is some truncation on the y axis here. While I appreciate keeping the focus on the detail, it would be good to see just how off the approximation is on the boundaries instead of hiding it. Fig 4: * Panels (a-b) are claimed to have higher diversity than Panel c. Not sure about the argument being presented here. The diversity of various quantities depends on the parameterisation of the statistic being shown. Diversity in one statistic can correspond to non-diversity in a different function of the same statistic. E.g. here the top plots (statistic I) seem more diverse under the uniform prior. 
Though I'm sure one could compute a different statistic and have the opposite conclusion. So I'm not sure what these panels really demonstrate. * Panel d: - It seems quite generous to interpret the marginal prior for a as uniform, under any method. Also, there seem to be finite bounds on this (unbounded) parameter. What are they? - It's hard to judge if the marginal for b is even close to the 1/b rate as everything is so tiny. Please add the 1/b line to the plot so we can judge this more clearly. Similarly include a line on the marginals for g and k that indicates where the Gaussian case is (in which a 1/b rate might be expected). That is, the text claims that "we might expect the reference prior to decay as approximately 1/b", which may be true for a Gaussian, but is likely untrue for distributions with large skew and kurtosis. - There are large differences among the methods. Which are we to believe? The range of differences here could be quite influential on any posterior inference. - Similarly, what are we to make of the vastly different joint distributions? Some pairwise marginals exhibit very strong dependence, and others, very little. Fig 2 again: * As these models have tractable likelihoods, we can compare the results of the current paper (the SBI focus) against other papers that have approximated reference priors with tractable likelihoods. Which of the features presented here are SBI-method effects, and which are not? Methods And Evaluation Criteria: Because no comparisons with other (tractable likelihood) methods are performed, the authors come up with some odd ways of justifying that the results are good/credible. * The KS tests of differences between the actual and estimated reference priors are only really a way to compare the relative performance of the different methods, in that one can pass or fail any test based on one's choice of number of samples. But then statements like (p.6. 
last para) "all of our proposed methods score consistently low [test statistics], avoiding the red regions where the null hypothesis is rejected." seem slightly misleading, as it seems fairly clear that if the number of samples is increased (they are mysteriously truncated at the low value of 200 in Figure 3), then the KS statistics will fairly quickly go into the rejection region. Similarly in the discussion (p.8) the claim "in many cases [estimated priors] being indistinguishable from the ground truth according to standard two-sample tests" seems to be pushing credibility past acceptable bounds. * As discussed in Claims and Evidence, it's not clear what point the agent-based-model example is providing. One can generate "diverse" or "non-diverse" statistics simply by choice of statistic. Theoretical Claims: No theoretical claims made. It's all empirical. Experimental Designs Or Analyses: See other sections. Supplementary Material: Lightly skimmed. Relation To Broader Scientific Literature: The theoretical and methodological links to work like Nalisnick & Smith (2017) (Section 5) needed closer attention, as the contribution in this paper is purely the SBI setup. Essential References Not Discussed: . Other Strengths And Weaknesses: The strengths are that the use of reference priors is something that more researchers and analysts should be using, and methods and techniques that empower them to do so, particularly in the era of SBI, are highly valuable. The weaknesses are the perhaps not fully convincing simulations, and a lack of comparison to tractable-likelihood approaches to emphasise the performance of the SBI aspect more clearly. Other Comments Or Suggestions: Typos etc. Abstract "uninformative reference priors". All priors are informative in some way. The authors even cite Bernardo (1997)'s paper entitled "Non-informative priors do not exist". p.2 col 1 l.-7 "missing information to be gained" Should "missing" be "expected"? 
It doesn't make sense otherwise. p.2. and elsewhere. "the Jeffreys prior" -> "Jeffreys' prior". No "the", note the punctuation. It's the prior of Mr Jeffreys: Jeffreys' prior. p.2. Discussion of Jeffreys' prior as "a further motivation" for continuous priors. It's not clear what the narrative link to these priors is, nor why it's a further motivation. p.3 col 1 "a (lower bound on a )n estimate". Please write better sentences that don't play such tricks. p.6 col 2. Is "VLB" defined anywhere? p.7 equation (18). The value 0.8 here is actually a user-specified parameter (typically labelled "c"). This is how misunderstandings propagate in the literature. p.7 last para "Early work on reference priors HAS". Also "Mote"->"Monte" Questions For Authors: Questions are in "Claims and Evidence" and everywhere else (why are there so many sections in this review form?), and also: * p.4, col 1, paras 2 and 3 note that the prior and the posterior estimator are assumed to be of the same model class (so that both can be obtained by a different choice of parameters), such as a normalising flow. This is proposed for computational efficiency, but there is no discussion of the potential weaknesses or negative implications of this if, for example, the model class is unable to approximate these distributions well. p.3 col 1. Section 3 para 2. "Continuous approximations to reference priors are advantageous for a number of reasons" we are told, "as discussed in Section 2.1". Section 2.1 mentions the avoidance of a technical issue (second last paragraph), and an unclear link to Jeffreys' prior. So the merit of this claim is unclear. Ethical Review Concerns: . Code Of Conduct: Affirmed. Overall Recommendation: 3
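The reviewer's point that KS rejection is driven by sample size can be sketched with synthetic data; the Gaussians below are illustrative stand-ins for a true prior and a slightly-off approximation, not the paper's distributions.

```python
# With a fixed small discrepancy between two samples, the two-sample KS test
# has little power at n = 200 but confidently rejects at large n.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
p_values = {}
for n in (200, 20000):
    truth = rng.normal(0.0, 1.0, size=n)
    approx = rng.normal(0.1, 1.0, size=n)  # slightly mis-located approximation
    p_values[n] = ks_2samp(truth, approx).pvalue
print(p_values)  # small n: large p-value; large n: the same shift is rejected
```

This is why a non-rejection at n = 200 says more about the test's power at that sample size than about the quality of the approximation.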
Rebuttal 1: Rebuttal: Thank you for your feedback. We have included a set of new experiments [here](https://github.com/ICML-7582/rebuttal_plots/blob/main/plots.pdf); see the reply to Reviewer UJGD for a detailed explanation. ## Figure 2 - The bumps in panels a) and b) were an artefact of the network architecture. Sigmoid layers in normalizing flows seem to induce unexpected tail behaviours in the resulting densities. Having experimented further, they have now disappeared; see Fig. 5 in the rebuttal pdf. - The stopping point of $\theta = 100$ was largely arbitrary, but not going higher can be justified by the fact that the prior falls off as a power law for panels a) and b). - Neither the theoretical foundations of reference priors nor the VLB or GED methods require compact parameter sets. It happens that most of the simulators chosen require parameters to be in some compact set. - The question of where to truncate the prior for a potentially unbounded parameter space is important, but is a general problem when specifying priors and not specific to our methods. We instead address the problem of "given a parameter space, how could a minimally informative (proper, continuous, positive) prior be constructed?" - The y axes in panels c) and d) must necessarily be truncated due to the presence of asymptotes at the boundaries. We considered these plots to already adequately show that our methods have imperfections, such as InfoNCE and SMILE slightly overestimating densities close to the boundary in panel c), and GED exhibiting a slight left-skew in panel d). - Re. question on SBI effects: We now include a gold standard baseline for the reference priors in Fig. 2 of the accompanying pdf using the algorithm in [1], which uses knowledge of the tractable likelihood functions of these models. Plots and KS test statistics for these can be seen in Figure 6 in the rebuttal plots, and a table of additional metrics requested by Reviewer V6ZV is given in our response to them below. 
In general we perform better, but in some cases similarly. ## Figure 4 - We agree that diversity in one statistic $\neq$ diversity in another. We chose $x$ to be curves of the proportions of individuals who are susceptible, infected, and recovered through time. A good reference prior should -- since the MI is a difference between the entropies of the marginal likelihood $H(x)$, and the likelihood function in expectation over the prior, $H(x\mid\theta)$ -- maximise $H(x)$ (encouraging diversity _in statistic_ $x$) while minimising $H(x\mid\theta)$ (discouraging diversity _in statistic_ $x$ from individual likelihood functions, on average). Panels a) and b) in Fig. 4 aim to show this. In the accompanying rebuttal plots we include similar plots obtained by finding reference priors from GED and VLB with $x$ as the _infection curve only_. In this case, the infection curves are much more diverse. - Panel d): We agree that the features we expect are approximately, but not perfectly, recovered from our prior learning methods. In the revision, we will use some of the extra space to discuss reasons why we believe our methods have struggled for this g-and-k simulator. Our other experiments, in addition to our new SLCP experiments (see Fig. 1 in the rebuttal pdf), do however demonstrate that our approaches can learn useful approximations to reference priors. ## Methods and Evaluation Criteria Please see our discussion of our comparison to the method from [1] and about diversity above. ## Relation to Broader Literature We'll use some of the extra space to give more details on Nalisnick & Smith (2017) & how we differ from this prior work. ## Other Strengths and Weaknesses We hope our new experiments that compare the performance of our methods to the gold standard tractable-likelihood approach from [1] (see above & response to Reviewer V6ZV) and demonstrate the ability of VLB & GED to perform SBI accurately at no extra cost (see Fig. 
2 & 3 in accompanying pdf) address your concerns. ## Other Comments We'll fix the typos. We'll also: say "minimally informative", not "uninformative", in abstract; use "Jeffreys' prior"; write "optimise an estimate of, or a lower bound on, the MI..."; ensure "VLB" is defined on first use; and write $c$ in Eq. 18 and state that $c=0.8$ is a common choice that we also use. We'll explain the (missing) narrative link to Jeffreys' prior; the point of this, and Eq. 7 in particular, was to highlight that continuous, positive distributions satisfying Eq. 2 also (asymptotically & under regularity conditions) satisfy an alternative notion of "minimally informative" given in Eq. 7. Finally, we'll state these regularity conditions. ## Questions Our use of a prior and posterior estimator that are of the same model class is just for ease of hyperparameter tuning, and not a requirement of GED. On the comment re. continuous approximations: Please see "Other comments" above. [1] Berger et al., "The Formal Definition of Reference Priors", The Annals of Statistics (2009)
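To make the diversity point from the Figure 4 discussion concrete, here is a hedged toy sketch: an Euler-discretised SIR model with invented rates and priors (not the paper's simulator), where a broader prior over the infection rate yields visibly more diverse infection curves.

```python
# Toy SIR illustration: the spread of the statistic x (here, the peak of the
# infection curve) depends on the prior over the infection rate beta.
import numpy as np

def infection_curve(beta, gamma=0.1, steps=200, dt=0.5):
    s, i = 0.99, 0.01
    curve = []
    for _ in range(steps):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s, i = s + dt * ds, i + dt * di
        curve.append(i)
    return np.array(curve)

rng = np.random.default_rng(0)
broad = [infection_curve(b) for b in rng.uniform(0.1, 1.0, 100)]   # broad prior
narrow = [infection_curve(b) for b in rng.uniform(0.4, 0.5, 100)]  # narrow prior

def peak_spread(curves):
    return np.std([c.max() for c in curves])

print(peak_spread(broad), peak_spread(narrow))  # broad prior -> more diverse curves
```

As noted above, a good reference prior trades off exactly this kind of marginal diversity in $x$ against low diversity from each individual likelihood.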
Summary: This paper addresses the problem of constructing reference priors for simulation-based inference (SBI). Unlike most SBI research, which focuses on posterior or likelihood estimation given a user-defined prior, this work tackles the challenge of developing "uninformative" or "reference" priors in a principled way when strong prior knowledge is unavailable or undesirable. The authors formalize the problem in the context of SBI, where the likelihood is intractable, and propose several approaches for learning reference priors using normalizing flows. These methods, adapted from the existing reference prior literature, are based on variational approximations and mutual information estimators. The paper demonstrates and compares these methods on several benchmark tasks, including toy examples with analytically tractable reference priors and more complex, intractable examples. The goal is to enable "objective" Bayesian inference in SBI, minimizing the influence of subjective prior beliefs on the posterior. ## Update After Rebuttal I thank the authors for their detailed rebuttal and the inclusion of additional experiments. This effectively addressed all my questions and concerns. With the promised changes incorporated, I believe the paper represents a valuable contribution to the field, consistent with my initial positive evaluation. Claims And Evidence: The primary claim – that reference priors can be learned in the SBI context using the proposed methods – is well-supported. The paper provides clear derivations of the relevant algorithms, adapting them from established reference prior literature. The experiments on various benchmark tasks demonstrate the feasibility of learning approximations to reference priors. The theoretical grounding is solid. A secondary, implicit claim is that learning reference priors with these methods enables standard SBI via subsequent posterior estimation or density-ratio estimation, followed by MCMC sampling. 
While this is logically sound (given the learned prior, which provides either a likelihood estimate or a posterior/ratio estimate), the paper lacks a comprehensive empirical demonstration of this end-to-end practical application. While the authors briefly mention a comparison to the uniform prior predictive in the SIR example, this is insufficient to fully showcase the utility. What's missing are experiments that systematically compare posterior inference (including both posterior distributions and predictive performance) using a learned reference prior versus a standard, hand-crafted prior (e.g., uniform or a domain-informed prior). These comparisons should highlight the practical advantages, if any, of using the learned reference prior in subsequent SBI tasks. Crucially, this demonstration should emphasize that once the reference prior is learned, it can be readily reused for standard SBI without requiring further training, which is a key potential benefit of the proposed approach. This expanded demonstration would significantly strengthen the practical justification for the methods. This is not a major flaw, but a missed opportunity. Methods And Evaluation Criteria: Yes, the experimental design is generally appropriate. Using toy examples with analytically tractable reference priors provides a crucial validation of the learned priors. The more complex examples add further evidence, although a more in-depth discussion of the choices made, or the use of a well-established, real-world SBI benchmark (e.g., from a field like neuroscience or astrophysics, where SBI is commonly applied), would increase the practical relevance and impact. Theoretical Claims: I did not check the proofs in detail, focusing on the conceptual soundness and experimental validation. Experimental Designs Or Analyses: Yes, I reviewed the experimental design and analyses, and they appear sound in their current form (apart from the additional experiments and demonstrations mentioned below). 
Supplementary Material: No supplementary material was submitted. Relation To Broader Scientific Literature: The paper makes a valuable contribution by addressing a largely unexplored area within SBI: the principled construction of reference priors. While reference priors are well-established in Bayesian inference, their application to SBI, with its intractable likelihoods, is novel. This work connects to the ongoing discussion about the faithfulness of SBI methods (e.g., Hermans et al., 2023, concerning posterior calibration and overconfidence) by highlighting the often-overlooked role of the prior in the overall inference pipeline. It addresses the need for "objective" Bayesian inference in SBI, where minimal prior information is incorporated. Essential References Not Discussed: The paper provides a good discussion of prior work on learning reference priors. A connection to generalized Bayesian inference (or "post-Bayesian" inference) could strengthen the contextualization. This related field, with applications in SBI (e.g., Matsubara et al., 2021; Gao et al., 2023; Järvenpää et al., 2025), offers alternative approaches to handling uncertainty and model misspecification. While this paper focuses on a different approach (the prior), discussing the relationship and contrasting it with generalized Bayesian inference would provide a more complete view of the research landscape. Other Strengths And Weaknesses: ### Strengths - The paper is generally very well-written and clearly structured. - The problem and motivation are clearly articulated, and the necessary background is introduced thoroughly yet concisely. - The contributions are clearly stated. ### Weaknesses/Suggestions - Concept Figure: Figure 1 is quite minimal and sketch-like. A more prominent and informative figure, appearing earlier in the paper, would help convey the overall approach more effectively. 
- Practical Context for SBI: While the paper argues for the use of reference priors, it would benefit from a more thorough discussion of the practical context within SBI. The introduction mentions that the modeler might want to minimize the influence of their prior beliefs. However, in many practical SBI applications, practitioners do have prior constraints (e.g., bounds on interpretable simulator parameters). This difference between the "fully objective Bayesian" ideal and the typical SBI use-case should be explicitly addressed. Other Comments Or Suggestions: 1) line 198: "update update" Questions For Authors: 1) Proper vs. Improper Priors: On page 6, the paper mentions that the Exponential and Scale Gaussian models have improper reference priors. However, this distinction between proper and improper priors wasn't introduced earlier in the paper. Could you clarify this point and perhaps provide a brief discussion of the implications of using improper priors in this context? 2) Practical Application: Could you elaborate on how the proposed methods would be applied in a practical SBI scenario? For instance, consider a well-known SBI benchmark like the Hodgkin-Huxley model, which is often used with uniform priors. - How would one design and learn a reference prior in this case? - What are the expected computational costs compared to using a uniform prior? - How would a practitioner evaluate whether the learned reference prior is "good" or "sensible"? - What would be the conceptual and practical benefits for the practitioner? 3) Method Selection: The paper proposes several methods for learning the reference prior (e.g., VBEM, direct optimization, sequential learning). How should a practitioner choose among these options in a given application? Are there any heuristics or guidelines that can inform this choice, based on factors like the complexity of the simulator, the dimensionality of the parameter space, or the computational resources available? 
If no such heuristics currently exist, could you outline a potential research direction for developing them? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the valuable feedback. We have included a set of new experiments; see the reply to Reviewer UJGD for a detailed explanation. ## Questions 1. We will include explicit technical definitions for, and a discussion of, proper and improper priors. Briefly: an improper prior is an "unnormalisable" prior, i.e. one with infinite integral over the support, while a proper prior can be normalised to have integral 1; improper priors do not strictly correspond to probability measures for this reason, but can sometimes still produce proper Bayesian posterior distributions. 2. We will use some of the additional space to include a discussion on these 4 points at the end of our revised paper. Briefly: - The first of these points we discuss in bullet point 3 directly below this one. - Using a user-specified (e.g., Uniform) prior may be cheaper since it involves no learning of a prior; however, we are not recommending that our learned reference priors replace any subjective priors the modeller believes in, but rather that they complement such subjective priors by enabling a prior sensitivity analysis. Our methods allow modellers to demonstrate the degree to which their inferences differ from the case where "minimal" information (as measured by the MI between $\theta$ and $x$) is built into the prior, and so we envision that our methods generate priors that are used _alongside_ the modeller's subjective prior. - To assess whether a reference prior is sensible (whether it serves the purpose it intends to serve), the modeller can estimate the MI between $\theta$ and $x$ using the different networks trained in the VLB and GED methods, or e.g. via sample-based entropy estimators [1]. However, note that estimating MI from finite data is a challenging problem (see [2], [3]). Alternatively, one can measure divergences between prior and posterior distributions as an estimate of the MI to assess these. 
- We outline what we believe is the main practical and conceptual benefit in the second bullet point above, and will further emphasise this in the revision. ### Question 3: Differences between VLB and GED We will include detailed heuristics in the paper. VLB methods maximise a variational lower bound for the MI which relies on learning critic networks to (essentially) construct density ratio estimates; GED methods construct density estimates from samples of the marginal and conditional distributions of $(\theta, x)$ and maximise directly the estimated MI. - GED methods do not require differentiability so they are preferred for problems with non-differentiable data collection, but (for equally computationally complex simulators) VLB methods are less computationally expensive than GED methods (in the sense that they only need to learn one density estimator instead of two). - GED methods rely on constructing an estimate for $\pi(\theta \mid x)$. In problems prone to yield very complex posterior distributions, this may hinder the efficacy of the method. In these, VLB methods might be preferred (when applicable), since this entails only one density estimation task on $\Theta$ (i.e., learning the prior), while GED involves two (i.e., both the prior and posterior). - When successfully learning a reference prior $\pi$ via GED, we also get (at no extra cost) an estimator for the posterior $\pi(\theta \mid x)$ under $\pi$, which we can directly use for e.g. SBI. We did this in the new rebuttal experiments (see Figures 2 and 3). ## Weaknesses 1. We will add detail to Figure 1 in the revision for clarity and place it earlier. 2. We agree that practitioners will often have prior constraints, and we support the incorporation of such beliefs into their Bayesian analysis. 
As we discuss above, we do not argue against the use of such subjective priors, but instead intend to provide practitioners with tools to assess how much information they have built into their prior, by equipping them with methods that let them find "minimally informative" priors. Our methods can of course also be used when no strong subjective prior beliefs exist, or in situations in which the practitioner prefers to minimise their own influence on the Bayesian analysis for whatever reason. ## General comments - **On the connection to GBI**: We’ll use some of the extra space to briefly discuss connections to GBI. - **Line 198**: Thank you, we'll correct this typo. ## References [1] Kraskov, Alexander, Harald Stögbauer, and Peter Grassberger. "Estimating mutual information." Physical Review E—Statistical, Nonlinear, and Soft Matter Physics 69.6 (2004): 066138. [2] Song, Jiaming, and Stefano Ermon. "Understanding the limitations of variational mutual information estimators." arXiv preprint arXiv:1910.06222 (2019). [3] McAllester, David, and Karl Stratos. "Formal limitations on the measurement of mutual information." International Conference on Artificial Intelligence and Statistics. PMLR, 2020. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed rebuttal and additional experiments. All my concerns and questions have been addressed. With the inclusion of the additional results and promised changes, this paper will make a valuable contribution to the SBI literature, which is consistent with my initial positive evaluation. I look forward to seeing the final version. --- Reply to Comment 1.1.1: Comment: It's great to hear that the reviewer's concerns have been addressed and we thank them again for their helpful comments.
FedClean: A General Robust Label Noise Correction for Federated Learning
Accept (poster)
Summary: The paper introduces FedClean, a robust framework designed to address label noise in federated learning (FL) scenarios. FedClean employs a two-stage label correction approach to identify and rectify noisy labels from both local noisy label learning and global model perspectives. It also proposes a novel adaptive sample size-weighted aggregation (ASSA) method to mitigate the impact of label noise and enhance global model performance. FedClean does not assume the presence of clean clients or specific noise distributions, making it highly versatile. Key contributions include: (1) a two-stage label correction scheme with a collaborative per-sample loss to reduce false corrections, (2) the ASSA method to adjust client influence based on clean sample sizes, and (3) extensive experiments demonstrating that FedClean outperforms existing noise-label learning methods in FL, effectively handling label noise even when all clients are noisy. ## update after rebuttal After reading the authors' responses, I decide to keep my original score. Claims And Evidence: Yes, all claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, both proposed methods and evaluation criteria make sense for the problem and application. Theoretical Claims: Yes, both proofs for the theoretical claims are correct, with a sound overall structure and logical flow. The proofs employ standard techniques in Bayesian inference, maximum a posteriori estimation, and error analysis. The arguments for incorporating inferred labels to improve model accuracy and provide stability in noisy environments are well-supported by the mathematical framework. In Theorem 3.1, the use of Bayes' theorem to update model predictions with inferred labels as prior information is correctly applied, and the MAP estimation to refine predictions is consistent with Bayesian learning principles. 
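As a purely illustrative sketch of this kind of Bayesian update (the class probabilities, inferred label, and smoothing weight below are invented, not taken from the paper):

```python
# Toy sketch: combine a model's softmax prediction (acting as the likelihood)
# with a smoothed one-hot prior built from an inferred label, then take the
# MAP estimate. All numbers are illustrative assumptions.
import numpy as np

def map_with_inferred_label(pred, inferred_label, n_classes, smoothing=0.8):
    """Posterior proportional to model prediction times inferred-label prior."""
    prior = np.full(n_classes, (1.0 - smoothing) / (n_classes - 1))
    prior[inferred_label] = smoothing       # smoothed one-hot prior
    posterior = pred * prior
    posterior /= posterior.sum()            # normalise (Bayes' theorem)
    return posterior.argmax(), posterior

pred = np.array([0.40, 0.35, 0.25])         # low-confidence, possibly noisy output
label_hat, post = map_with_inferred_label(pred, inferred_label=1, n_classes=3)
print(label_hat)  # -> 1
```

Even though the raw prediction marginally favours class 0, the inferred label acts as the stabilising prior signal described in the proofs, and the MAP estimate moves to the inferred class.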
The error analysis compares misclassification rates before and after incorporating inferred labels, showing a valid performance improvement. In Theorem 3.2, Bayes' theorem is used again to compute the posterior probability, correctly combining the model's prediction and the inferred label. The proof demonstrates that inferred labels provide a stable signal even in the presence of noisy annotations, enhancing the robustness of the model, and it effectively shows how inferred labels act as stabilizing factors, leading to more accurate final estimates. Experimental Designs Or Analyses: Yes, the experimental design in this paper appears solid, with a clear comparison to state-of-the-art methods across multiple settings, including both IID and non-IID distributions. The use of multiple benchmark datasets (CIFAR-10, CIFAR-100, and Clothing1M) with varying levels of label noise is appropriate for assessing robustness to noisy clients. Additionally, the inclusion of ablation studies (FedClean variants) allows for insight into the effectiveness of specific features. Supplementary Material: Yes, I have reviewed the supplementary material on the theoretical proofs and algorithm description. Relation To Broader Scientific Literature: FedClean's key contributions extend prior research in FL and noisy label correction, offering novel solutions to improve effectiveness and robustness. The paper highlights the importance of FL in privacy-sensitive domains, focusing on challenges from label noise and the limitations of centralized learning due to privacy concerns and data diversity. FedClean's noise correction framework contrasts with methods assuming clean clients or specific noise distributions, addressing real-world FL challenges. The two-stage label correction scheme and per-sample loss assessment method advance FL noise correction. 
The Adaptive Sample Size-Weighted Aggregation (ASSA) method adjusts client influence to reduce label noise and improve model performance, while zkCor enhances privacy. Experimental results validate FedClean’s superior performance, establishing it as a more effective solution compared to existing FL methods. Essential References Not Discussed: No, there are no essential references not discussed. Other Strengths And Weaknesses: Strengths: 1. Originality and Novelty: The paper's approach is quite original, particularly in how it removes restrictive assumptions made by prior work. Most existing methods for label noise correction in federated learning (FL) assume the existence of clean clients or specific noise distributions. In contrast, the proposed FedClean framework does not make these assumptions, which significantly broadens its applicability in real-world scenarios where all clients could be noisy. The introduction of the two-stage label correction scheme and adaptive sample size-weighted aggregation (ASSA) method is a creative combination of existing ideas, offering a robust solution to the challenge of noisy data in federated settings. 2. Practical Significance: The framework’s application to federated learning (FL) is of high significance given the growing use of FL in privacy-sensitive domains such as healthcare, finance, and IoT. The ability to handle label noise in FL models without compromising client privacy opens up significant opportunities for more robust, real-world machine learning deployments. This work addresses a critical gap in the literature and directly contributes to improving the practical effectiveness of FL models in real-world applications. 3. Clear and Well-Structured Presentation: The paper is clear and well-structured, with a logical flow from introduction to problem definition, related work, the proposed method, and experimental validation. 
The explanation of key concepts, such as label noise in FL and existing methods for noise correction, is straightforward and accessible. The contribution of the FedClean framework is clearly articulated, making it easy for the reader to grasp the innovation and advantages over existing approaches. 4. Experimental Validation: The experimental results provided are thorough and demonstrate the efficacy of the proposed method. Testing on synthetic and real-world datasets allows the paper to show that FedClean not only performs well but also outperforms state-of-the-art methods in mitigating label noise, even when all clients are noisy. This adds credibility and reliability to the claims made in the paper. Weaknesses: 1. It's important to consider the implementation of CNNL methods, which are noted as relatively basic in this work; future improvements in CNNL could enhance the method's performance, but it's not clear whether these enhancements have been practically tested or if there's a risk of overfitting to the baseline. 2. One potential concern is the assumption that all clients in the Clothing1M dataset are noisy, as this may oversimplify the real-world variability in client noise levels. Other Comments Or Suggestions: Since Appendices B and C in the supplementary material contain the proofs of Theorem 3.1 and Theorem 3.2 from the main text, Theorem B.1 and Theorem C.1 in the supplementary material are in fact Theorem 3.1 and Theorem 3.2 respectively, so why not unify the numbering? Questions For Authors: 1. It's important to consider the implementation of CNNL methods, which are noted as relatively basic in this work; future improvements in CNNL could enhance the method's performance, but it's not clear whether these enhancements have been practically tested or if there's a risk of overfitting to the baseline. 
2.One potential concern is the assumption that all clients in the Clothing1M dataset are noisy, as this may oversimplify the real-world variability in client noise levels. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their constructive feedback. Below are our responses to the comments and questions: **1. Basic CNLL Implementation \& Overfitting Risk** The reviewer raises a valid point regarding the use of basic CNLL methods. In this work, we intentionally adopted standard CNLL techniques (e.g., Co-teaching and Joint Optim) to establish a baseline comparison and demonstrate the generality of FedClean’s framework. While advanced CNLL methods could further improve performance, our experiments already show that FedClean outperforms state-of-the-art methods even with these "basic" implementations. Importantly, FedClean’s two-stage correction and ASSA inherently mitigate overfitting risks: (1) **Stage 1**: Clean sample selection via local CNLL filters out noisy data early. (2) **Stage 2**: Collaborative per-sample loss and adaptive aggregation (ASSA) dynamically adjust client contributions, reducing reliance on any single model’s predictions. (3) **Mixup**: Label smoothing further enhances robustness. We acknowledge that integrating more sophisticated CNLL methods (e.g., DivideMix) is a promising direction. Preliminary tests with DivideMix showed a 1.2–2.5\% accuracy gain on CIFAR-10, validating this potential. However, we prioritized simplicity to highlight FedClean’s core contributions. Future work will explore this in depth. **2. Assumption on Clothing1M Dataset** The Clothing1M dataset inherently contains real-world label noise (about 40\% noise rate), and prior work treats all clients as noisy by default. Our setup reflects this property to evaluate FedClean’s ability to handle pervasive noise. That said, FedClean does not strictly assume uniform noise levels across clients. The framework naturally accommodates variability: (1) **ASSA**: Clients with fewer clean samples (higher noise) contribute less to aggregation (Eq. 4, 15, 18). (2) **Collaborative loss**: Adjusts correction confidence per sample (Eq. 
11), adapting to local noise conditions. In future work, we will explicitly test FedClean on datasets with heterogeneous client noise levels (e.g., varying $\tau$ per client) to further validate its flexibility. **3. Unifying Theorem Numbering** We appreciate the reviewer’s observation. The discrepancy arises because Theorems 3.1 and 3.2 in the main text correspond to Theorems B.1 and C.1 in the appendix. This was an oversight in numbering due to the appendix’s structure. We will revise the appendix to align theorem labels (e.g., "Theorem 3.1 (Extended Proof)") for clarity. We thank the reviewers for their insightful feedback, which strengthens our work. We will incorporate all suggestions, including theorem numbering fixes and expanded experiments on heterogeneous noise, in the final version. FedClean’s design principles—eliminating restrictive assumptions and leveraging collaborative correction—provide a robust foundation for real-world FL deployments, and we are excited to build upon this framework.
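To make the ASSA weighting concrete, here is a minimal sketch (our own illustrative simplification; `assa_aggregate`, `client_params`, and `clean_counts` are hypothetical names, and the actual method follows Eqs. 4, 15, and 18 of the paper):

```python
# Illustrative sketch (not FedClean's actual implementation): each
# client's model parameters are weighted by the number of samples its
# local filter judged clean, so noisier clients contribute less.
def assa_aggregate(client_params, clean_counts):
    total = sum(clean_counts)
    weights = [c / total for c in clean_counts]
    return {
        key: sum(w * params[key] for w, params in zip(weights, client_params))
        for key in client_params[0]
    }

clients = [{"w": 1.0}, {"w": 3.0}, {"w": 5.0}]
clean = [80, 40, 0]  # the third client has no clean samples -> weight 0
agg = assa_aggregate(clients, clean)
assert abs(agg["w"] - 5.0 / 3.0) < 1e-9
```

Under this weighting, a fully noisy client (zero clean samples) is effectively excluded from the global update.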
Summary: The paper introduces FedClean, a robust label noise correction method for federated learning that employs a two-stage correction process to identify and rectify noisy labels, coupled with adaptive sample-size-weighted aggregation. Notably, FedClean operates effectively without requiring clean clients or assumptions about specific noise distributions. Extensive experimental results show the effectiveness of the proposed FedClean method. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I have gone through the theorems and found no obvious errors. Experimental Designs Or Analyses: The experimental design comprehensively evaluates FedClean against baseline and state-of-the-art methods on benchmark datasets with diverse label noise levels. Evaluations are conducted under both IID and non-IID settings, supplemented by ablation studies to assess the contributions of individual components to overall performance. Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. This paper presents FedClean, a new and robust label noise correction framework for federated learning. 2. The manuscript is clearly written and well-structured, ensuring ease of readability. 3. Extensive experiments demonstrate that FedClean consistently outperforms existing baselines across diverse noise levels. Weaknesses: 1. The proposed method does not seem to work well: in Table 2, the baselines beat the proposed method in every setting except $\rho$=1 and $\tau$=0.5, on both the IID and non-IID settings. 2. In Table 3, the results on the CIFAR-100 dataset with the non-IID setting are missing. 3. In Table 5, we can see that Mixup seems unimportant in the proposed framework. 4. The communication cost is not reported. Other Comments Or Suggestions: Please refer to the weaknesses. Questions For Authors: 1. The proposed method does not seem to work well: in Table 2, the baselines beat the proposed method in every setting except $\rho$=1 and $\tau$=0.5, on both the IID and non-IID settings. 2. In Table 3, the results on the CIFAR-100 dataset with the non-IID setting are missing. 3. In Table 5, we can see that Mixup seems unimportant in the proposed framework. 4. The communication cost is not reported. 5. This paper's contributions are limited, as it repeatedly employs common data partitioning and data augmentation methods. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the constructive feedback. Below, we address the concerns raised: **1.** **_Reviewer Comment:_** **The proposed method does not seem to work well: in Table 2, the baselines beat the proposed method in every setting except $\rho=1$ and $\tau=0.5$, on both the IID and non-IID settings.** The reviewer observes that baselines outperform FedClean when not all clients are noisy (e.g., $\rho=0.5$). This is expected because methods like FedCorr and FedNoRo assume the existence of clean clients, allowing them to leverage clean data more effectively. In contrast, FedClean is designed for scenarios where clean clients may not exist (e.g., $\rho=1$), prioritizing robustness over specialized performance in idealized settings. While FedClean shows marginally lower accuracy when $\rho=0.5$ (e.g., 86.79\% vs. FedCorr’s 89.11\% in non-IID), the difference is minimal ($\le2.3\%$). However, when $\rho=1$, FedClean improves the accuracy by more than 20\%. This trade-off ensures FedClean’s versatility in real-world FL deployments, where clean clients are often unavailable. **2.** **_Reviewer Comment:_** **In Table 3, the results on the CIFAR-100 dataset with the non-IID setting are missing.** The experimental results of FedClean on CIFAR-100 non-IID show consistent patterns with CIFAR-10, where FedClean outperforms baselines and state-of-the-art methods under high noise levels (e.g., FedClean$^2$ achieves 62.36\% accuracy for $\rho=1$, $\tau=0.5$, compared to FedCorr's 38.64\%). The omission of these results in Table 3 is due to two reasons: first, previous FNLL studies (e.g., FedCorr (Xu et al., 2022), FedNoRo (Wu et al., 2023)) did not report CIFAR-100 non-IID results, perhaps due to their underperformance on this dataset; second, due to limited space, and following the result presentation style of previous FNLL studies, we did not include the CIFAR-100 results in the main body.
However, following your suggestion, we will include the following table in the appendix of the final version. The best accuracies (\%) of various methods on the CIFAR-100 dataset with the non-IID setting at different noise levels:

| Methods | ρ=0, τ=0 | ρ=0.5, τ=0.3 | ρ=1, τ=0.5 |
|----------------|:----------:|:------------:|:------------:|
| FedAvg | 68.75 | 58.34 | 29.12 |
| FedProx | 69.72 | 59.01 | 30.45 |
| FedCorr | 68.73 | 68.95 | 38.64 |
| FedNoRo | 67.84 | 65.22 | 35.78 |
| FedBeat | 66.91 | 63.10 | 28.93 |
| FedELC | 66.55 | 63.75 | 29.67 |
| FedClean$^1$ | 67.15 | 65.83 | 60.89 |
| FedClean$^2$ | 67.80 | 66.45 | 62.36 |

**3.** **_Reviewer Comment:_** **In Table 5, we can see that Mixup seems unimportant in the proposed framework.** **Mixup is very important.** We have provided a detailed explanation in our response to **Reviewer 1B4p**’s similar question (see *''Response to Methods and Evaluation Criteria”*). We kindly refer you to that section for a comprehensive discussion. **4.** **_Reviewer Comment:_** **The communication cost is not reported.** Thank you for your comment. **We would like to clarify that our study focuses on learning from noisy data in FL, not on communication cost.** We use the standard FL protocol (FedAvg), whose communication cost can be found in FedAvg (McMahan et al., 2016). Our method does NOT introduce any additional communication overhead. Also, communication environments have NO impact on the performance of our method. Like many previous FNLL studies (e.g., FedCorr (Xu et al., 2022), FedNoRo (Wu et al., 2023), etc.), which did not discuss communication issues, communication efficiency is out of the scope of our study. **5.** **_Reviewer Comment:_** **This paper's contributions are limited, as it repeatedly employs common data partitioning and data augmentation methods.** FedClean’s core novelty lies in its two-stage correction and its lack of any clean-client assumption, addressing a critical gap in prior work.
Existing methods fail when all clients are noisy. Additionally, ASSA dynamically adjusts client weights based on clean sample contributions, departing from static aggregation in FedAvg. These innovations enable FedClean to generalize across noise distributions and client configurations, improving FL robustness in practical settings. In federated crowdsourcing, clean clients rarely exist due to the limited expertise of workers. Therefore, FedClean’s ability to function without clean clients is crucial. This is a challenge existing methods fail to address. FedClean fills this gap, making FL systems more reliable in environments with highly variable data quality. Thus, we believe our contributions are significant. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the response. They have answered my questions well. I will raise my score.
Summary: The paper proposes FedClean, a robust label noise correction framework for federated learning that addresses client-side label noise without assuming clean clients or specific noise distributions. Key contributions include: (1) Two-stage label correction. Combines local centralised noisy label learning to select clean samples and global model insights to rectify noisy labels, aided by a collaborative per-sample loss to reduce false corrections. (2) Adaptive aggregation. Adjusts client influence in global model updates based on clean sample sizes, mitigating noise impact while incorporating secure label correction (zkCor). (3) Versatility and performance. Demonstrates effectiveness in high-noise scenarios (including 100% noisy clients) through experiments on synthetic and real-world datasets, outperforming existing FL noise-label learning methods. FedClean enhances global model robustness by jointly leveraging local noise correction and global model confidence. Claims And Evidence: Yes, all claims made in the submission are well supported. Methods And Evaluation Criteria: The proposed methods are well-motivated for tackling label noise in federated learning. The integration of CNLL, Mixup, and ASSA directly addresses challenges such as identifying clean samples and mitigating the influence of noisy clients. Moreover, the evaluation criteria—using benchmark datasets like CIFAR-10, CIFAR-100, and Clothing1M under both IID and non-IID settings with varying noise ratios—are appropriate for demonstrating the method's robustness and practical relevance. However, further evaluations in larger-scale or more diverse real-world scenarios could provide additional insights into the approach’s scalability and generalisability. Theoretical Claims: I examined the proofs for Theorem 3.1 and Theorem 3.2. 
Theorem 3.1 establishes a bound on performance improvement when incorporating inferred labels via a MAP estimation approach, while Theorem 3.2 uses a Bayesian argument to demonstrate that inferred labels offer additional stability in estimating true labels. The logical flow of both proofs is generally sound and leverages standard probabilistic tools such as Bayes’ theorem. The two theorems align with practical FL challenges, addressing label noise through a blend of Bayesian reasoning and collaborative learning, and the reliance on appendices for extended proofs indicates thorough theoretical exploration beyond the main text. These aspects underscore the conceptual rigour of FedClean’s theoretical claims, supporting its innovation in federated label noise correction. Experimental Designs Or Analyses: The experimental design and analyses demonstrate strong validity and soundness, with several notable strengths: FedClean is rigorously evaluated against two baselines and five state-of-the-art methods, ensuring a thorough benchmark, while experiments cover both IID (CIFAR-10/100) and non-IID (CIFAR-10, Clothing1M) data distributions, as well as varying noise levels and noise bounds, showcasing FedClean’s adaptability. The use of Clothing1M, a dataset with inherent label noise, validates FedClean’s robustness in practical, noisy environments, and the inclusion of ablation studies highlights the contributions of individual components (e.g., CNLL methods like Co-teaching and Joint Optim) to FedClean’s performance. Notably, FedClean maintains strong performance even when all clients are noisy, demonstrating its independence from clean clients and superior noise resilience. Supplementary Material: Appendix A explains the rationale for using a collaborative per-sample loss function over traditional loss measures, highlighting its effectiveness in assessing label consistency.
Appendix B provides a detailed proof of Theorem 3.1 by linking the reduction in prediction error to the Kullback-Leibler divergence between the model’s prior and posterior distributions. Appendix C offers an in-depth stability analysis that supports Theorem 3.2, demonstrating how inferred labels contribute to more accurate true label estimation. Finally, Appendix D outlines the integration of zkCor for secure label correction, enhancing the privacy aspects of the proposed ASSA mechanism. Collectively, these supplementary materials reinforce the paper’s theoretical foundations and complement its experimental design. Relation To Broader Scientific Literature: The paper’s key contributions address gaps in federated learning (FL) and noisy label learning. Previous FL work highlights the challenge of label noise in decentralized settings, which degrades model performance due to the lack of centralized data preprocessing. While methods like Co-teaching and DivideMix work well in centralized settings, they fail in FL because of privacy issues and limited local data diversity. FedClean bridges this gap by adapting these techniques for FL while maintaining privacy. Unlike other methods that discard noisy clients or assume clean clients, FedClean works in scenarios where all clients may be noisy, making it suitable for real-world issues like federated crowdsourcing and adversarial attacks. It introduces a two-stage correction scheme that combines local noisy label learning and global model insights, improving robustness and reducing false corrections. FedClean’s ASSA adjusts client influence based on clean sample sizes, unlike methods like FedFixer and FedRN, which rely on specific noise distributions or increase communication overhead. It also uses zkCor for secure label correction, protecting privacy while enhancing robustness. Overall, FedClean overcomes the limitations of prior methods, offering a versatile solution for noisy label problems in FL. 
Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. FedClean proposes a novel two-stage label correction scheme that combines local noisy label learning with global model insights, addressing a critical gap in FL by eliminating the assumption of clean clients. This is a significant departure from existing methods which rely on clean clients. 2. FedClean’s ability to handle 100% noisy clients is a major advancement, making it applicable to real-world scenarios like federated crowdsourcing and adversarial environments, where clean clients may not exist. 3. The framework’s versatility, not assuming specific noise distributions, broadens its applicability compared to methods like FedFixer, which are limited to specific noise types. 4. The paper is well-structured, with a clear taxonomy of related works in both centralized and federated noisy label learning, providing a solid foundation for understanding the contributions. Weaknesses: 1. While FedClean performs well on image datasets (e.g., CIFAR, Clothing1M), exploring its applicability to other data types (e.g., text, time-series) could further validate its generalisability. 2. The approach involves multiple stages and numerous hyperparameters, which might require careful tuning in practical applications. Other Comments Or Suggestions: It would be better if FedClean could be extended to semi-supervised or unsupervised FL scenarios, where labeled data is scarce, which could broaden its impact. Questions For Authors: 1. While FedClean performs well on image datasets, can its applicability to other data types be explored to further validate its generalisability? 2. Can FedClean be extended to semi-supervised or unsupervised FL scenarios, where labeled data is scarce, to broaden its impact in future work? 3. Could exploring FedClean's applicability to other data types further validate its generalisability, given its success on image datasets? Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thoughtful feedback and constructive suggestions. Below, we address the raised points: **1. Applicability to Other Data Types** FedClean's design is fundamentally modality-agnostic, as its core components - the two-stage label correction and adaptive aggregation (ASSA) - operate on general principles of prediction confidence and sample consistency rather than domain-specific features. While our current validation used image datasets for benchmarking, the framework's reliance on comparing annotation labels with model predictions (Eq. 11) makes it directly applicable to text data, and ASSA's dynamic client weighting (Eqs. 4,15,18) naturally extends to time-series or other sequential data. The methodology requires no architectural changes for different data types, only standard modality-specific feature extractors. We recognize the importance of empirical validation across domains and will include comprehensive non-image experiments (particularly for NLP and time-series) in future work to further demonstrate this versatility. **2. Complexity and Hyperparameters** While FedClean introduces three key hyperparameters ($\sigma_1$, $\sigma_2$ for correction rates and $\epsilon$ for confidence threshold), our extensive experiments demonstrate remarkable robustness to their variations - for instance, varying $\sigma_1$ by $\pm 0.1$ caused less than 1\% accuracy fluctuation on CIFAR-10. The two-stage architecture was deliberately designed to minimize sensitivity to parameter choices while maintaining effectiveness. To further assist practitioners, we will provide detailed guidelines for parameter selection based on noise level estimation. **3. Extension to Semi-Supervised/Unsupervised FL** We agree with the reviewer's suggestion about extending FedClean to semi-supervised and unsupervised FL settings, as our framework's core label correction mechanism (Section 3.4) is particularly well-suited for such scenarios. 
The two-stage correction approach can naturally incorporate pseudo-labeling for unannotated data by leveraging the global model's predictions as high-confidence labels, while the adaptive aggregation (ASSA) would maintain robustness against potentially noisy pseudo-labels. Furthermore, the framework's architecture provides a natural pathway for unsupervised adaptation through integration of contrastive learning or other clustering techniques to discover latent label structures from unlabeled data. These promising extensions align perfectly with our future work plans (Section 6) and could significantly broaden FedClean's applicability to real-world scenarios where labeled data is scarce. **4. Generalizability Validation** We agree that testing on diverse data types strengthens FedClean’s claims. To address this, in future work, we will further study experiments on text (e.g., AG News) and time-series (e.g., UCI HAR) datasets. We appreciate the reviewer’s insightful comments, which highlight valuable extensions for FedClean. Our responses clarify the framework’s versatility and outline concrete steps for future work. We are committed to refining FedClean’s applicability across domains and learning paradigms.
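To illustrate the pseudo-labelling direction discussed above, the following toy snippet sketches confidence-thresholded label correction from a model's softmax outputs (our own hypothetical sketch; `correct_labels` and `eps` are illustrative names standing in for the confidence threshold $\epsilon$, not FedClean's implementation):

```python
# Hypothetical sketch: the global model's prediction replaces an
# annotated label only when the model disagrees with it AND its
# softmax confidence exceeds a threshold eps.
def correct_labels(probs, labels, eps=0.9):
    corrected = []
    for p, y in zip(probs, labels):
        conf = max(p)
        pred = p.index(conf)
        # keep the annotated label unless the model disagrees confidently
        corrected.append(pred if (pred != y and conf >= eps) else y)
    return corrected

probs = [[0.05, 0.95], [0.6, 0.4], [0.98, 0.02]]
labels = [0, 0, 1]
print(correct_labels(probs, labels))  # [1, 0, 0]
```

Raising `eps` makes correction more conservative, which is the usual trade-off between fixing noisy labels and introducing false corrections.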
Summary: The authors propose a method for federated learning that may involve label noise. The proposed method, FedClean, first uses local centralized noisy label learning to select clean samples for training the global model. Afterwards, a two-stage correction scheme is performed by exploiting local noisy label learning and the global model. Finally, a model aggregation method is applied to reduce the impact of label noise. Experimental results show the improvements achieved by the proposed method. ## update after rebuttal After reading rebuttals and other reviews, I became more convinced by the paper and increased my rating from weak accept to accept. Claims And Evidence: - The proposed method seems especially effective in situations where most clients involve label noise. This seems to be a good characteristic of the proposed method. Methods And Evaluation Criteria: The improvement from the Mixup module is rather tiny. I am not sure why this scheme is included in the final model. Theoretical Claims: The theoretical proof looks correct. Experimental Designs Or Analyses: - In Table 5, I found that "Ours w/o correction" is already far better than the other methods in Table 2. This raises questions about which part makes the method achieve SOTA performance: I suspect that the model could surpass other methods thanks to the good baseline rather than the proposed method. Especially, the gap between the full model (Ours) and Ours w/o correction is only 7-10% in the IID and non-IID cases, respectively, in Table 5. However, the gap between Ours w/o correction in Table 5 and the best-performing method, FedFixer, in Table 2 is over 12%. Supplementary Material: No supplemental material submitted. Relation To Broader Scientific Literature: The method could be useful in the context of both FL and CL. Essential References Not Discussed: I think the references are rather complete.
Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: None Ethical Review Flag: Flag this paper for an ethics review. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s time and constructive feedback on our work. Your insightful comments have helped us better clarify the contributions and limitations of our method. Below, we address each point raised in the review, and we hope our responses will alleviate your concerns. **1. Response to Methods and Evaluation Criteria** *Reviewer Comment:* "The improvement by Mixup module is rather tiny. I am not sure why this scheme is especially involved in the final model." *Response:* Thank you for your valuable observation. We appreciate your attention to this detail and have carefully considered your feedback. We would like to address your concern from both theoretical and experimental perspectives: **Theoretical Considerations:** Mixup generates new samples through linear interpolation, and its label smoothing property helps mitigate noisy labels’ impact on local models. In federated settings, where local data is limited and the noise distribution is unknown, Mixup enhances training robustness through implicit regularization. Research on both CNLL (Zhang et al. 2018) and FNLL (Xu et al., 2022) shows Mixup’s effectiveness in improving robustness against noisy labels. **Experimental Observations:** The results of "Ours w/o Mixup" in Table 5 show that Mixup's improvement on CIFAR-10 is limited, primarily due to the dataset characteristics (50,000 samples, 10 categories) and the experimental design. The redundancy in CIFAR-10 reduces Mixup’s impact, and to ensure comparability with previous works (e.g., FedNoRo), we used their experimental setup, which might not fully highlight Mixup’s potential with smaller client data. Additionally, Mixup’s simple implementation and low time/space complexity make it a valuable addition regardless of its exact performance improvement. [1] Zhang, H., Cisse, M., Dauphin,Y.N., and Lopez-Paz, D. mixup: Beyond empirical risk minimization. In International Conference on Learning Representations, 2018. 
[2] Xu, J., Chen, Z., Quek, T.Q., and Chong, K.F.E. Fedcorr: Multi-stage federated learning for label noise correction. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10184–10193, 2022. **2. Response to Methods and Evaluation Criteria** *Reviewer Comment:* "In Table 5, 'Ours w/o correction' already outperforms others in Table 2... raises questions on which part contributes most to SOTA performance." *Response:* Thank you for your insightful observation. We greatly appreciate your attention to the details of our results. We address your concerns as follows: Our method consists of two main components (Figure 1). The first part is the preprocessing stage, involving CNLL conducted locally by clients. However, due to limited client dataset sizes in FL, particularly with non-IID data, relying solely on CNLL methods does not achieve ideal training performance. Therefore, existing FNLL methods, including FedFixer, discard CNLL and design FNLL methods relying on clean clients. Our research has shown that while CNLL cannot be directly applied to FL, when integrated into our two-stage correction mechanism, FedClean performs better than most state-of-the-art methods in the presence of clean clients (a scenario widely studied). In this case, FedClean's performance was lower than the best method by only 2\%. However, in the absence of clean clients (a scenario often neglected), existing FNLL methods fail, while FedClean still performs robustly. We acknowledge that FedClean's success relies substantially on the preprocessing module (particularly CNLL, as evidenced by the 'Ours w/o CNLL' results in Table 5). However, without integration with the two-stage correction module, CNLL alone cannot simultaneously achieve: (1) robust performance regardless of clean clients' presence, and (2) the capability to fill the research gap where existing FNLL methods fail to consider scenarios without clean clients.
Regarding your concern: “I suspect the model could surpass others due to a good baseline rather than the proposed method,” we would like to highlight that Table 2 clearly shows that when all clients are noisy, existing methods fail, while FedClean maintains stable performance. Compared to other FNLL methods, FedClean achieves at least a 20\% improvement. We hope this clarifies the unique contribution of our method and reassures you of our approach's validity. Thank you again for your thoughtful feedback, which helped refine our explanation.
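For reference, the Mixup interpolation of Zhang et al. (2018) discussed in point 1 of our response can be sketched in a few lines (a generic illustration of the technique, not the paper's training code):

```python
import random

# Minimal Mixup sketch (Zhang et al., 2018): convex combinations of
# two samples and their one-hot labels; lambda ~ Beta(alpha, alpha).
def mixup(x1, y1, x2, y2, alpha=0.2):
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

x, y = mixup([1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0])
# the soft labels still sum to 1; blending labels this way is what
# smooths the impact of any single noisy annotation
assert abs(sum(y) - 1.0) < 1e-9
```

Because labels become convex mixtures rather than hard one-hot targets, a mislabelled sample contributes only a fraction of its (wrong) label to each training example.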
Soft Reasoning: Navigating Solution Spaces in Large Language Models through Controlled Embedding Exploration
Accept (spotlight poster)
Summary: The authors proposed a new LLM inference sampling framework that searches for different reasoning paths by controlling a Gaussian embedding that is inserted into the sequence. They proposed embedding perturbation for controlling the sampling in the continuous space, and use Bayesian optimisation to guide the sampling via a verifier-guided objective. Comprehensive comparisons with other sampling strategies and ablation studies are provided to prove the effectiveness and efficiency of the framework. Claims And Evidence: Most claims are well supported by ablation studies or prior works, but not with theoretical proof. A theoretical proof of "searching for a start token in the continuous embedding space" being better than "sampling every token in the discrete token space" could further improve the soundness of the paper. Methods And Evaluation Criteria: The method and the experiments are reasonable. Although controlling the reasoning path via only one token embedding is a bit counter-intuitive, there are similar attempts in prior works like prompt tuning and other decoding strategies like FIRE and CoT-Decoding. Theoretical Claims: Most equations are seemingly correct to me. Experimental Designs Or Analyses: The experiments are comprehensive and convincing. The main experiment includes 3 LLMs and 4 datasets, and there are detailed ablation studies to verify the effectiveness of each design. Experiments are repeated 5 times to reduce variance. But for the experiments on efficiency, I think there is still room for improvement. The authors only provide a comparison with RAP in terms of token usage, but I believe comparisons with the other methods in Table 1 and a comparison of inference time cost can be added to further prove the efficiency of this method.
Supplementary Material: The appendix includes many implementation details like prompts used in experiments and technical details on Bayesian Optimisation, providing important information for reproducing the results. Relation To Broader Scientific Literature: The paper's approach is an exploration in the field of LLM decoding strategies. It can be regarded as an extension of prior first-token based strategies like FIRE and CoT-Decoding, which is mentioned in the paper. Essential References Not Discussed: No. Other Strengths And Weaknesses: Please refer to the above sections for Strengths and refer to the "Claims and Evidence" and "Experimental Designs Or Analyses" sections for Weaknesses and Suggestions. Other Comments Or Suggestions: Please refer to the "Claims and Evidence" and "Experimental Designs Or Analyses" sections for Weaknesses and Suggestions. Questions For Authors: 1. It is intriguing that the authors have found that applying this approach can increase the activation rate of neurons by 3–4%. How do you understand this phenomenon, and have you tried to analyze why this occurs? 2. To my knowledge, there are methods trying to interpret "soft" tokens in the embedding space, e.g., finding their nearest neighbors (discrete tokens of LLMs) in the embedding space. Have you tried to interpret the sampled first token in your framework? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **1. Why Embedding Search Outperforms Discrete Sampling** > A theoretical proof of "searching for a start token in the continuous embedding space" being better than "sampling every token in the discrete token space" could further improve the soundness of the paper. Thank you for the suggestion. Searching for a start token in the continuous embedding space is preferable to sampling every token in the discrete token space from the theoretical, interpretability, and cost perspectives: - **Optimal solution to the objective.** The set of the embeddings of all tokens forms a discrete subset of our continuous space. Importantly, this means that the optimal solution (maximiser) to our objective function may **not** lie within this discrete subset. - **Role of the soft token.** The soft token in our framework is not intended to approximate a specific discrete token, but rather serves as a functional control token influencing the overall response. Optimising in the continuous space allows exploration of representations that may lie between or beyond discrete tokens, enabling smoother control and better performance than fixed-token selection. - **High computational cost.** With the vocabulary size |V|>30k, sampling every token in the discrete token space requires generating the output for each token x and computing the corresponding f(x), making it computationally expensive at O(|V|). **2. Computational efficiency** > The authors only provide a comparison with RAP in terms of token usage, but I believe comparisons with the other methods in Table 1 and a comparison of inference time cost can be added to further prove the efficiency of this method.
As the reviewer suggested, we additionally report the inference time cost (in minutes) across all baselines:

|Method|GSM8K|GSM-Hard|SVAMP|StrategyQA|
|-|-|-|-|-|
|SC(τ=0.4)|26.58|33.91|20.54|20.04|
|SC(τ=0.6)|26.12|34.46|21.76|19.87|
|SC(τ=0.8)|27.28|34.86|21.16|20.80|
|FIRE|26.70|32.26|21.59|20.17|
|CoT-Decoding|26.56|32.55|21.53|20.60|
|RAP|184.52|234.14|142.52|149.73|
|Ours|**23.15**|**28.42**|**18.41**|**17.44**|

Our method consistently achieves the lowest inference time across all tasks, further demonstrating its efficiency beyond token-level savings.

**3. Neuron activation**

> It is intriguing that the authors have found that applying this approach can increase the activation rate of neurons by 3–4%. How do you understand this phenomenon, and have you tried to analyze why this occurs?

Thank you for the insightful question. We believe this phenomenon occurs because our perturbation strategy encourages broader exploration within the neuron space, thereby increasing the likelihood of activating previously dormant or marginal neurons across the architecture. In our setting, we define a neuron as `activated` if its activation value is greater than zero. As our perturbations produce more diverse token embeddings, the resulting activations tend to be more widely distributed across different neurons as well, thus increasing the overall number of activated neurons.

However, we would like to stress that the overall increase in activation rate does not necessarily translate to improved final answer accuracy. Rather, the key contributing factor is the ability to activate more neurons identified as critical for the current problem instance [1]. The increased activation rate simply raises the likelihood of activating these critical neurons. The BO process then further optimises the soft embedding to consistently target them, as illustrated in Figure 4 of our paper.
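The activation-rate bookkeeping described above can be sketched as follows; this is an illustrative reconstruction of the stated definition (an activation value strictly greater than zero counts as activated), not the paper's actual instrumentation code:

```python
import numpy as np

def activation_rate(mlp_activations):
    """Fraction of MLP neurons counted as `activated`, i.e. with an
    activation value strictly greater than zero, across all layers."""
    activated = sum(int(np.sum(layer > 0)) for layer in mlp_activations)
    total = sum(layer.size for layer in mlp_activations)
    return activated / total

# Toy example: two MLP layers with known sign patterns.
layers = [np.array([0.3, -0.1, 0.0, 2.0]),   # 2 of 4 neurons activated
          np.array([-0.5, 1.2, 0.7, -0.2])]  # 2 of 4 neurons activated
print(activation_rate(layers))  # -> 0.5
```

In practice the per-layer arrays would come from forward hooks on the model's MLP blocks, and comparing this rate before and after BO iterations reproduces the kind of measurement reported above.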
To further illustrate this point and to analyse the functional importance of these neurons, we have conducted an ablation study and found that **masking critical neurons** reduced accuracy from **62.14%** to **13.27%**, whereas random masking only dropped it to **41.89%**. This confirms that the activated neurons are non-random and impactful, consistent with prior findings on causal neurons [2].

[1] Andy Zou et al., 2025. Representation Engineering: A Top-Down Approach to AI Transparency.
[2] Kevin Meng et al., 2022. Locating and Editing Factual Associations in GPT. NeurIPS

**4. Interpretation of the Soft Token**

> Have you tried to interpret the sampled first token in your framework?

We appreciate the reviewer's suggestion. We conducted a nearest-neighbor analysis to interpret the optimised soft token, but found it to be a clear outlier in the embedding space. Its Euclidean distances from all vocabulary tokens are several orders of magnitude beyond normal variation (e.g., z-score > 3000), suggesting that it does not correspond meaningfully to any real token. This is expected, as the soft token in our framework is not intended to approximate a specific discrete token, but rather to serve as a functional control token influencing the overall response.
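The outlier check behind this nearest-neighbour analysis can be sketched as follows; the vocabulary matrix and soft token here are random stand-ins, so only the shape of the computation (not the z-score magnitude) mirrors the reported result:

```python
import numpy as np

def distance_zscore(soft_token, vocab_embeddings, n_pairs=1000, seed=0):
    """Z-score of the soft token's mean Euclidean distance to the vocabulary,
    relative to typical distances between random vocabulary-token pairs."""
    d_soft = np.linalg.norm(vocab_embeddings - soft_token, axis=1)
    rng = np.random.default_rng(seed)
    i, j = rng.integers(0, len(vocab_embeddings), size=(2, n_pairs))
    d_ref = np.linalg.norm(vocab_embeddings[i] - vocab_embeddings[j], axis=1)
    return (d_soft.mean() - d_ref.mean()) / d_ref.std()

rng = np.random.default_rng(1)
vocab = rng.normal(size=(500, 64))   # stand-in vocabulary embeddings
soft = rng.normal(size=64) * 100.0   # a far-away "soft token"
print(distance_zscore(soft, vocab))  # large positive z-score -> outlier
```

A soft token that behaved like a real token would score close to zero under this statistic, which is the sense in which the reported z-score > 3000 marks it as an outlier.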
Summary: This paper proposes a Bayesian Optimization-based approach to improve the test-time performance of pre-trained LLMs. The authors propose to sample a perturbation vector from an initial Gaussian distribution, which is added to the embedding of the first token in the answer to control answer generation. To improve the answer, the authors propose a coherence reward and a verifier reward as the objective for BO. Expected Improvement (EI) is used as the acquisition function.

Claims And Evidence: Strengths:
- The proposed algorithm improves the performance of three different base LLMs, and has demonstrated better performance than other perturbation/sampling algorithms.
- The BO algorithm converges, as shown in Figure 5.

Weaknesses:
- Lack of discussion of and comparison with other controlled generation works [1,2]

[1] Mudgal, Sidharth, et al. "Controlled Decoding from Language Models." International Conference on Machine Learning. PMLR, 2024.
[2] Qi, Xiangyu, et al. "Safety alignment should be made more than just a few tokens deep." arXiv preprint arXiv:2406.05946 (2024).

Methods And Evaluation Criteria: There is one tricky part of the method that confuses me:
- How is the verifier score calculated? What is the verifier? Is it another LLM? L189 lacks an explanation of $y_v$. I think the choice of verifier matters a lot for performance and has a large influence on the impact of this algorithm. If the verifier is another more performant LLM, I'd like to see the performance of that LLM. It is very hard to judge the contribution of this paper without such information, and this also raises questions about whether the comparison is fair.

Another question regarding the algorithm:
- Why choose EI? There exist multiple acquisition functions for BO, such as the UCB score; are there any theoretical or empirical results that support the choice of EI?
Theoretical Claims: N/A

Experimental Designs Or Analyses: Most experimental designs are fair, but I'd like to know more details about the choice and design of the verifier.

Supplementary Material: N/A

Relation To Broader Scientific Literature: People working on autoregressive models might also be interested in this approach.

Essential References Not Discussed:
[1] Mudgal, Sidharth, et al. "Controlled Decoding from Language Models." International Conference on Machine Learning. PMLR, 2024.
[2] Qi, Xiangyu, et al. "Safety alignment should be made more than just a few tokens deep." arXiv preprint arXiv:2406.05946 (2024).

Other Strengths And Weaknesses: See above.

Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: **1. Verifier Setup**

Regarding the questions about details of the verifier, **NO** separate or stronger LLM is used for verification (Line 31-33, right column); the same model as the generator is employed (i.e. LLaMA3-8B-Ins, Qwen2-7B-Ins, and Mistral-7B-Ins).
- The verifier score $r_{verifier}(y) = \mathbb{1}_{y_v = y}$ is a binary indicator, where $y_v$ is the model's regenerated answer based on the current question and previous responses. Specifically, $y_v$ is obtained by prompting the same LLM with a verification query:

> *Based on the given question and the previous answers, please provide your analysis and final answer.*

The exact prompts used are listed in Appendix B.3.
- We also conducted ablation studies (see *Verifier Comparison: Judgment vs. Generation*, Line 427, left column) to compare different verifier strategies and assess their impact on performance.

This setup ensures a fair comparison, as no external or more capable model is used. It also highlights one of the main contributions of this work: our BO framework's ability to enhance reasoning capabilities using a single unified LLM without additional verifiers. We will add these details on the setup of the verifier to our paper in the next revision.

**2. Related Works**

> Lack of discussions and comparisons to other controlled generation works [1,2]

Thanks, we will add the comparison in the revision. Our goal is to propose an **efficient reasoning framework that does not require additional or stronger verifiers, nor task-specific model fine-tuning**. Instead, we leverage verification from the model itself to enhance the reasoning ability of LLMs. The methods in [1,2] serve different purposes: [1] uses trainable prefix scorers to guide decoding, while [2] proposes a fine-tuning objective aimed at improving robustness against adversarial prompts. Both approaches require additional training, making direct comparison with our training-free method less straightforward.
We implemented a baseline based on [1] using blockwise best-of-8 decoding (block size = 16), where the model scores its own generations without any fine-tuning. To improve the scorer's quality, we applied self-consistency. For [2], we performed token-wise constrained fine-tuning on the LLaMA3-8B-Ins model using LoRA (5 epochs, learning rate 2e-5, rank 16).

|Method|Training|Shot|GSM8K|GSM-Hard|SVAMP|StrategyQA|
|-|-|-|-|-|-|-|
|Constrained Fine-tuning[2]|✅(lora)|-|78.3±0.7|13.6±0.5|83.5±0.7|**81.3**±0.8|
|prefix scorer[1]|❌|Zero|75.2±0.9|26.1±1.3|83.6±0.9|65.2±1.3|
|ours|❌|Zero|79.4±1.2|28.2±1.8|88.2±1.3|67.2±0.7|
|prefix scorer[1]|❌|Few|81.2±1.6|33.6±1.4|88.5±1.2|72.4±1.1|
|ours|❌|Few|**84.3**±1.4|**35.7**±1.0|**90.2**±0.6|75.6±0.8|

As shown, while [2] achieves the best performance on the StrategyQA task with additional training, our method, without requiring any extra training, achieves superior results on the other three tasks, particularly in the few-shot setting. In a fairer comparison (i.e. without training), our method consistently outperforms the prefix scorer [1] across all tasks.

[1] Controlled Decoding from Language Models. ICML
[2] Safety alignment should be made more than just a few tokens deep.

**3. Why choose EI?**

> There exist multiple acquisition functions for BO, such as the UCB score; are there any theoretical or empirical results that support the choice of EI?

Indeed, PI and GP-UCB can also be used as the acquisition function. The cumulative regret for GP-UCB has the same rate (Theorem A.1) as EI [3]. Unlike EI or GP-UCB, PI only considers the probability of improvement, without accounting for its magnitude. It is thus less theoretically grounded and more prone to premature exploitation [4].
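For concreteness, the three acquisition functions compared in this ablation have simple closed forms under a Gaussian posterior N(mu, sigma^2) with incumbent value f_best; the sketch below is the textbook formulation for maximisation, not the authors' implementation:

```python
import math
from statistics import NormalDist

def expected_improvement(mu, sigma, f_best):
    """EI: expected amount by which the candidate beats the incumbent."""
    z = (mu - f_best) / sigma
    n = NormalDist()
    return (mu - f_best) * n.cdf(z) + sigma * n.pdf(z)

def probability_of_improvement(mu, sigma, f_best):
    """PI: probability of improvement only, ignoring its magnitude."""
    return NormalDist().cdf((mu - f_best) / sigma)

def gp_ucb(mu, sigma, beta):
    """GP-UCB: optimism controlled by the exploration parameter beta."""
    return mu + math.sqrt(beta) * sigma

# Two candidates with identical PI but very different upside:
# PI cannot tell them apart, while EI prefers the higher-variance one.
print(probability_of_improvement(1.0, 1.0, 1.0),
      probability_of_improvement(1.0, 5.0, 1.0))  # -> 0.5 0.5
print(expected_improvement(1.0, 1.0, 1.0),
      expected_improvement(1.0, 5.0, 1.0))        # EI grows with sigma here
```

The toy comparison at the end is exactly the premature-exploitation failure mode noted above: PI assigns the same score to both candidates, whereas EI accounts for the magnitude of the potential improvement.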
|Method|GSM8K(ZS)|GSM8K(FS)|GSM-Hard(ZS)|GSM-Hard(FS)|SVAMP(ZS)|SVAMP(FS)|StrategyQA(ZS)|StrategyQA(FS)|
|-|-|-|-|-|-|-|-|-|
|ours(EI)|**79.4**±1.2|**84.3**±1.4|**28.2**±1.8|35.7±1.0|**88.2**±1.3|**90.2**±0.6|**67.2**±0.7|75.6±0.8|
|PI|74.6±1.5|82.1±1.2|28.0±1.5|35.2±0.8|85.3±1.0|89.5±2.2|66.9±1.8|74.3±0.4|
|UCB β=1|76.7±1.3|83.3±0.3|27.7±1.6|34.7±1.2|86.0±1.5|88.0±0.0|66.7±1.5|74.7±1.5|
|UCB β=2|77.9±1.9|83.3±1.6|27.8±0.8|**36.2**±1.5|85.0±1.0|88.7±1.4|66.8±2.3|**75.7**±1.9|
|UCB β=5|75.6±1.9|81.8±0.9|27.7±0.8|34.2±2.5|85.3±0.8|89.7±0.8|66.7±1.0|75.0±0.7|

The experiments show that PI consistently underperforms compared to EI, as expected from the theoretical discussion above. For GP-UCB, its performance is sensitive to the choice of the exploration parameter and is, in most settings, worse than EI. We would also like to mention that the optimal parameter choice for GP-UCB varies across different tasks, making it difficult to guarantee good performance in unseen settings. In contrast, EI performs robustly without requiring task-specific tuning.

[3] Taking the human out of the loop: A review of Bayesian optimization. Proc. IEEE
[4] Gaussian process optimization in the bandit setting: No regret and experimental design. ICML

---

Rebuttal Comment 1.1: Comment: I appreciate the rebuttal from the authors. This has resolved my concern, so I would raise my score to weak accept.

---

Reply to Comment 1.1.1: Comment: We truly appreciate Reviewer MeVe's thoughtful feedback. Your comments on the verifier design, choice of acquisition function, and related work were highly valuable in helping us refine our explanations and better contextualize our contributions. We will incorporate these improvements into the paper.
Summary: This work proposes a novel way of exploring the search space of LLM responses by perturbing the input embedding in the generated sequence. In particular, the authors design an online learning scheme that uses Bayesian optimization to adjust the parameters of the noise so that the generated outcome (after adjustment) leads to a better reward or score. The authors conduct a thorough comparison with strong baselines to show improvements coming from better search exploration, in contrast to naive sampling with temperature.

Claims And Evidence:
- Simple token-level distribution adjustment (increasing the temperature) leads to higher chances of sampling hallucinated or degenerate outputs. The proposed method instead steers the sequence in a desirable way according to a given reward/score model (or self-rewarding score).
- The idea of using random projections to reduce the dimension of the embedding to optimize is well motivated by highlighted issues of Bayesian optimization in high-dimensional spaces.

Methods And Evaluation Criteria: The proposed method effectively improves the sample efficiency of the search when an outcome-based reward or score is available. The authors show on real-world examples how this allows us to find a better prediction or outcome compared to more usual best-of-n methods that iterate over independent samples from the model.

Theoretical Claims: The methodology described in this work is clearly written, although I am not an expert in Bayesian optimization in particular.

Experimental Designs Or Analyses: I read the experimental results and analysis: the choice of baselines makes sense. One experiment that might be very useful here is to verify how useful this approach might be in RL training, where we need to sample useful trajectories from an LLM. This method could be a promising direction for training-time sampling strategies.
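The random-projection idea highlighted above can be sketched as a REMBO-style mapping: Bayesian optimisation runs over a low-dimensional vector z, and a fixed random matrix lifts each candidate into the full embedding space as a perturbation. The dimensions and names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d_embed, d_low = 4096, 16  # illustrative: full embedding dim vs. BO search dim

# Fixed Gaussian projection, drawn once and reused for every BO candidate.
A = rng.normal(size=(d_embed, d_low)) / np.sqrt(d_low)

def lift(z):
    """Map a low-dimensional BO candidate z to an embedding-space
    perturbation that would be added to the first-token embedding."""
    return A @ z

delta = lift(rng.normal(size=d_low))
print(delta.shape)  # -> (4096,)
```

Because the GP surrogate then only has to model a 16-dimensional function, this sidesteps the usual difficulty of Bayesian optimisation in thousands of dimensions while still producing full-size perturbations.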
Supplementary Material: n/a

Relation To Broader Scientific Literature: The idea itself does not have any groundbreaking components, but it is a smart application of known methods to come up with a very useful framework for guided generation with optimization during inference. The related work provides meaningful connections to other guided generation algorithms.

Essential References Not Discussed: n/a

Other Strengths And Weaknesses: In my opinion, this work could get even more impact and recognition if the authors showed the effectiveness of this approach in terms of sample efficiency when it is used during training with preference optimization or reward-based training.

Other Comments Or Suggestions: n/a

Questions For Authors: n/a

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal:

> In my opinion this work could get even more impact and recognition if the authors showed the effectiveness of this approach in terms of sample efficiency when it is used during training with preference optimization or reward-based training.

Thank you for this insightful and forward-looking suggestion. We agree that extending our embedding-based exploration method to training, particularly within preference optimisation frameworks, to enhance sample efficiency could be a promising direction. Specifically, our Bayesian optimisation approach to embedding perturbations can be integrated into preference optimisation as an efficient and targeted data generation mechanism.

A key bottleneck in current reward-based training frameworks lies in the use of standard generation methods (e.g., temperature sampling), which often produce outputs that, while differing in perplexity, appear similarly plausible, making it difficult for reward models or human annotators to assign differentiated scores. Our proposed method could offer an alternative. Instead of relying on random sampling or temperature-based decoding to generate candidate answers, our method explores promising regions in the embedding space, producing diverse and high-quality outputs with fewer trials. During training, these outputs could be evaluated using human preferences or reward models to construct preference pairs or ordered sequences with more clearly separated scores, providing more efficient supervision signals for optimising the model. By having more distinctive generated samples and preference-aligned candidates in each iteration, the training process can access richer supervision without increasing the number of queries or annotations. This could lead to better utilisation of limited feedback and help the model learn more effectively from each batch of generated candidates.
Additionally, one of the main properties of our proposed framework is that it is model-agnostic and lightweight. It can be seamlessly integrated into standard preference optimisation pipelines without modifying model architectures or requiring access to gradients. We appreciate the suggestion and view this integration as a promising approach for generating more preference-aligned data during training. We plan to explore its impact on training efficiency and reasoning performance in future work.
Summary: This paper introduces a novel embedding-based search framework to enhance complex reasoning in Large Language Models (LLMs). It perturbs the embedding of the first token with Gaussian noise and optimizes this perturbation via Bayesian optimization (BO), guided by a verifier model. Experiments on multiple challenging datasets (GSM8K, GSM-Hard, SVAMP, StrategyQA) across three models (LLaMA, Qwen, Mistral) demonstrate consistent and significant accuracy improvements over existing strong baselines, especially in zero-shot settings. The approach achieves these gains efficiently, converging in a few iterations without requiring access to model internals. Overall, the method offers a scalable and effective way to systematically improve LLM reasoning capabilities.

Claims And Evidence: The claims made regarding improved accuracy, diversity of solutions, and computational efficiency are convincingly supported by thorough empirical evaluation. Results consistently favor the proposed method over baselines like Chain-of-Thought, self-consistency, and FIRE. One minor weakness is the claim about improving coherence, which is indirectly inferred from correctness but not explicitly measured.

Methods And Evaluation Criteria: The proposed methods (embedding perturbation and Bayesian optimization) are clearly justified and suitable for the problem of enhancing reasoning in LLMs. Evaluation criteria (accuracy, coverage), datasets (GSM8K, GSM-Hard, SVAMP, StrategyQA), and baselines used (CoT, FIRE, RAP, etc.) are appropriate and widely accepted for benchmarking LLM reasoning improvements.

Theoretical Claims: The theoretical underpinnings (Bayesian optimization, embedding perturbation, dimension reduction) are sound and appropriately applied, though no fundamentally new theoretical insights are developed. The key theoretical novelty lies in the innovative combination of embedding-space optimization and BO for controlling LLM generation trajectories.
Experimental Designs Or Analyses: The experimental design is robust, sound, and fair. Results were validated with multiple seeds and clearly show low variance, enhancing reliability. Ablation studies (token placement, verifier strategies, embedding dimensionality) further validate design choices. A minor limitation is evaluating on subsets (200 samples per dataset), though this is mitigated by randomness and multiple seeds. Supplementary Material: Yes, I reviewed the supplementary material, specifically Appendices A (details of BO and optimization strategies) and B (extended experimental results, neuron activation analysis). These materials added valuable details to understand the methodology and confirm experimental robustness. Relation To Broader Scientific Literature: The paper advances beyond standard methods like chain-of-thought prompting and heuristic decoding strategies (self-consistency, FIRE, RAP) by proposing a principled embedding-space search strategy. Unlike discrete-token sampling methods, it leverages continuous embedding perturbation and Bayesian optimization, significantly improving reasoning accuracy. It also aligns with recent trends exploring LLMs' abilities for self-verification and iterative refinement. Essential References Not Discussed: The paper should discuss foundational work on **diverse decoding**, particularly "Diverse Beam Search" [1] (Vijayakumar et al., 2016), which explicitly balances diversity and quality in generation. Additionally, "Self-Refine" [2] (Madaan et al., 2023) is a relevant **iterative self-improvement** method, which aligns conceptually with the idea of model-guided refinement. [1] Vijayakumar, A. K., Cogswell, M., Selvaraju, R. R., Sun, Q., Lee, S., Crandall, D., Batra, D. (2016). Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). 
[2] Madaan, A., Tandon, N., Gupta, P., Alon, U., Yang, Y., Yazdanbakhsh, A. (2023). Self-Refine: Iterative Refinement with Self-Feedback for Large Language Models. arXiv preprint arXiv:2303.17651.

Other Strengths And Weaknesses:
**Strengths:**
- Strong empirical results demonstrating significant accuracy improvements.
- Computational efficiency and model-agnostic nature make it widely applicable.
- Clear and rigorous presentation of methods, analyses, and experiments.

**Weaknesses:**
- Dependence on potentially imperfect internal LLM verifier signals.
- Perturbation limited to the first token; may not correct deeper reasoning errors.
- Experiments restricted to smaller-scale models and specific reasoning tasks, raising questions about scalability to larger models or broader domains.

Other Comments Or Suggestions: Please refer to the Questions For Authors.

Questions For Authors:
1. **Verifier Reliability:** Could you clarify whether incorporating external or domain-specific verifiers (e.g., exact math calculators) is straightforward within your framework, and whether you expect substantial performance gains from a more reliable verifier?
2. **Scaling to Larger Models:** Have you performed initial tests on larger models (e.g., GPT-4 scale)?
Would you anticipate similar performance improvements, or could diminishing returns become significant at larger scales? 3. **Optimization Scope:** Have preliminary experiments been conducted on optimizing embeddings for multiple tokens rather than just the first token, and would you expect significant gains from such extensions? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **1. Verifier Reliability** Thanks for the comment. You are right in saying that incorporating more reliable, domain-specific verifiers could further improve performance. Our framework is flexible enough to integrate such tools, and we agree that doing so would likely yield gains in tasks where those verifiers are applicable [1]. That said, our current focus is on improving reasoning in scenarios where such external tools are not available or applicable — for instance, in common-sense or social reasoning, where exact computation isn’t useful. By relying on internal LLM verifier signals, we aim to explore the model's capacity to self-reflect and reason without additional guidance (Line 31-33, right column). [1] Large Language Models for Mathematical Reasoning: Progresses and Challenges. ACL **2. Scaling to Larger Models** Thanks for the suggestion. Note that our method involves modifications within the model’s parameter space, and due to the closed-source nature of GPT-4, it is not possible to test on it. 
Nonetheless, the point is well-taken and we have conducted additional experiments using Qwen2-72B-Instruct, a significantly larger model (72B) than those used in the main paper (7B and 8B), across both zero-shot and few-shot settings:

|Method|AIME-2024(ZS)|AIME-2024(FS)|GSM8K(ZS)|GSM8K(FS)|GSM-Hard(ZS)|GSM-Hard(FS)|SVAMP(ZS)|SVAMP(FS)|StrategyQA(ZS)|StrategyQA(FS)|
|-|-|-|-|-|-|-|-|-|-|-|
|COT|0|3.3|91.0|91.0|51.5|65.0|93.0|92.0|79.0|90.0|
|SC(τ=0.4)|3.3±2.7|3.3±3.3|93.3±0.6|91.8±1.0|62.3±0.6|68.7±1.5|93.2±0.3|93.8±0.3|78.2±0.8|89.6±1.4|
|SC(τ=0.6)|2.2±3.3|2.2±3.8|93.7±0.3|92.2±1.0|62.7±0.3|68.2±1.4|93.8±0.6|93.1±0.5|78.8±1.6|90.0±1.3|
|SC(τ=0.8)|3.3±1.9|2.2±1.9|94.0±0.5|93.5±0.5|62.8±2.0|68.3±0.3|93.7±0.3|93.7±0.3|78.4±2.5|88.8±2.3|
|FIRE|2.2±1.9|3.3±3.8|91.4±0.8|92.5±0.4|60.3±0.6|65.7±2.0|93.5±0.9|93.5±0.5|78.2±1.9|89.5±0.9|
|CoT-Decoding|2.2±1.9|2.2±1.9|93.6±1.9|93.5±1.1|61.0±2.3|66.0±1.7|**94.2**±1.2|93.8±1.5|78.8±1.6|89.0±1.5|
|RAP|-|4.4±3.4|-|93.5±0.4|-|69.1±5.2|-|93.7±5.0|-|**90.4**±5.0|
|Ours|**6.7**±2.7|**11.1**±1.7|**94.3**±0.3|**94.8**±1.3|**63.3**±0.6|**72.2**±0.6|94.0±1.0|**94.2**±1.3|**79.6**±0.3|89.2±1.2|

While we still see some improvements, the gains are indeed smaller on easier benchmarks like GSM8K and SVAMP, as large models can already achieve very high accuracy. To further stress-test our method, we evaluated it on the more challenging `AIME-2024` dataset. Here, we observe that our approach continues to yield significant gains even at the 72B scale, showing its robustness and scalability to harder tasks and more capable models. We will add these results to clarify the method's applicability beyond smaller-scale settings.

**3.
Optimisation Scope**

To investigate this, we conducted additional experiments where we optimised embeddings for the first k tokens (instead of just the first):

|#token(k)|GSM8K(ZS)|GSM8K(FS)|GSM-Hard(ZS)|GSM-Hard(FS)|SVAMP(ZS)|SVAMP(FS)|StrategyQA(ZS)|StrategyQA(FS)|
|-|-|-|-|-|-|-|-|-|
|1(ours)|**79.4**±1.2|**84.3**±1.4|**28.2**±1.8|**35.7**±1.0|**88.2**±1.3|**90.2**±0.6|**67.2**±0.7|**75.6**±0.8|
|2|75.0±1.8|83.3±1.3|24.8±0.6|34.7±1.0|83.0±0.5|90.2±1.4|68.5±0.5|74.2±1.3|
|5|69.7±3.5|81.2±2.1|22.2±1.0|29.8±0.8|85.5±0.3|88.3±0.3|66.3±0.6|71.7±1.2|
|10|61.0±3.1|73.7±2.3|17.2±1.6|23.8±0.3|82.8±0.8|86.8±0.8|67.3±1.9|71.7±1.6|
|20|52.2±2.3|62.0±2.6|19.0±1.3|18.2±1.9|74.3±0.6|81.5±2.0|67.8±2.5|68.7±1.1|

The performance generally degrades as k increases, especially beyond 5 tokens. This suggests that naively extending to multiple tokens can introduce instability or overfitting. We also compare with RAP (one of our baselines), a tree-search-based method that operates at the sequence level rather than token-by-token, though it shares similar ideas with token-by-token search. While RAP achieves strong performance, it incurs substantially higher cost and still underperforms our approach. Developing a multi-token optimisation strategy that can achieve both high accuracy and cost-effectiveness would require deeper investigation and extensive experimentation. It remains an interesting direction for future work and would warrant a separate paper.

**4. Coherence Evaluation**

We appreciate the reviewer's observation. To address this, we evaluated coherence using two metrics: perplexity and a coherence score rated by DeepSeek-R1-Distill-Llama-70B, prompted to rate responses from 1 (poor) to 5 (excellent).
We tested 800 samples (200 from each of four datasets) using LLaMA3-8B-Ins as the base model:

|Type|SC(τ=0.4)|SC(τ=0.6)|SC(τ=0.8)|FIRE|CoT-Decoding|Ours|
|-|-|-|-|-|-|-|
|Perplexity ↓|7.22|8.51|9.98|6.87|6.73|**6.53**|
|Coherence Score ↑|3.93|3.88|3.73|3.70|3.80|**4.08**|

We would like to mention that coherence is not the main objective of our method, but serves as a filtering criterion to discard low-quality completions and support answer correctness.

**5. Related Works**

We thank the reviewer for the suggestions. We will discuss them in the final version.

---

Rebuttal Comment 1.1: Comment: I appreciate the thorough response and the additional experiments that further strengthen the work. All of my concerns have been addressed, and I recommend accepting this submission.

---

Reply to Comment 1.1.1: Comment: We sincerely thank Reviewer 8Nqh for your thoughtful feedback. Your comments have helped us improve the work, particularly through the additional experiments on scaling to larger models, multi-token optimisation, and coherence evaluation. We will incorporate the suggested clarifications and related work into the paper.
Summary: To address the challenges of insufficient diversity and low search efficiency in large language models (LLMs) for complex reasoning tasks, this paper introduces a novel responsive sampling strategy. By applying Gaussian perturbations to the embedding of the first token generated by the LLM, using the correctness and coherence of the output as the objective function, and leveraging Bayesian optimization to iteratively search for optimal embedding points, this approach avoids the blindness of traditional temperature tuning and the inefficiency of heuristic search. Experimental results demonstrate significant accuracy improvements on three mathematical reasoning datasets and one commonsense reasoning dataset. Additionally, the study validates that Bayesian iteration enhances neuron activation rates in the MLP layers of LLMs, providing neuron-level evidence for the effectiveness of the proposed method.

Claims And Evidence: The embedding optimization-based framework proposed in the paper significantly enhances the accuracy and efficiency of LLMs in complex reasoning tasks through Gaussian perturbation and Bayesian optimization. Experimental results demonstrate that it outperforms mainstream baseline methods on multiple benchmark datasets. Ablation studies validate the necessity of both the verifier and coherence terms, while neuron activation analysis reveals its mechanism of enhancing reasoning through diverse neural pathways and further proves the effectiveness of Bayesian optimization. A minor limitation is that the description of the verifier-guided approach remains somewhat ambiguous, particularly regarding how it produces a refined output $y_v$ in the experimental section.

Methods And Evaluation Criteria: The paper proposes a responsive sampling method to address the challenges of insufficient answer diversity and low search efficiency in large language models (LLMs) for complex reasoning tasks.
It ensures generation diversity through Gaussian embedding perturbation of the first token and greedy sampling, explores the embedding space via iterative Bayesian optimization to ensure accurate and coherent answers, and employs dimensionality reduction techniques to effectively tackle the high computational costs of high-dimensional embedding spaces. The method is compared with three strong baseline methods (CoT, Self-Consistency, FIRE) on three mathematical reasoning datasets and one commonsense reasoning dataset, validating its effectiveness in complex reasoning tasks.

Theoretical Claims: Yes, the theoretical foundation of the paper incorporates the Expected Improvement (EI) criterion from Bayesian optimization theory and random projection methods for dimensionality reduction. These concepts are provided with specific and clear explanations in both the main text and the appendices.

Experimental Designs Or Analyses: In terms of dataset selection, recent research indicates that reasoning capabilities demonstrated in mathematical problems can generalize to other tasks. The paper also conducts experiments on more general commonsense reasoning tasks, validating the generalization ability of the proposed method. This dataset selection is therefore reasonable. For model selection, the paper employs Llama-3.1-8B-Instruct, Qwen2-7B-Instruct, and Mistral-7B-Instruct as backbone models, avoiding dependence on specific architectures. Regarding baseline selection, the study includes CoT Prompting, Self-Consistency Decoding, FIRE, CoT-Decoding, multi-path generation, and RAP (Monte Carlo Tree Search), covering both mainstream and state-of-the-art methods. The experimental section first evaluates the overall performance of the method on complex reasoning tasks. Under zero-shot and few-shot settings, it compares the proposed approach with strong mainstream baselines on three mathematical reasoning datasets and one commonsense reasoning dataset.
Significant accuracy improvements demonstrate that controlled embedding exploration outperforms existing baselines in accuracy, diversity, and efficiency. The paper then conducts interpretability analysis by comparing neuron activation rates in MLP layers across different iterations, illustrating the effectiveness of Bayesian iteration. Additionally, ablation studies validate the contributions of the objective function and dimensionality reduction techniques. Supplementary Material: No. Relation To Broader Scientific Literature: . Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The main experimental results demonstrate significant improvements compared to baselines. 2. The methodology section is logically structured, guiding readers through the process of exploring the embedding space, Bayesian optimization, the required objective function and iterative procedures, and dimensionality reduction to mitigate inference costs. The appendices provide corresponding and detailed supplementary explanations. 3. The experimental design is rigorous, with sufficient motivation and extensive experimental validation for each component of the method. Weaknesses: 1. While dimensionality reduction alleviates the curse of dimensionality, random projection may lose critical information. Further validation is needed to confirm that the negative impacts of dimensionality reduction are controllable. 2. Critical neurons play a pivotal role in the experiments, but their definition and identification rely solely on statistical-based approaches. The statistical significance of this method lacks theoretical justification, and it would be beneficial to supplement it with relevant prior work as a feasibility rationale. 3. Regarding computational efficiency, readers may expect comparisons of time and space costs rather than solely token count statistics. 4. 
It would be desirable to include additional details on the verifier’s experimental setup and implementation. Other Comments Or Suggestions: No Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 3
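The sampling loop the review describes (Gaussian perturbation of the first-token embedding, with candidates ranked by Expected Improvement) can be sketched in miniature. This is a hedged illustration, not the paper's implementation: the surrogate's posterior mean and standard deviation are taken as given, and all names (`expected_improvement`, `gaussian_perturb`) are illustrative.

```python
import math
import random

def normal_pdf(z):
    # Standard normal density.
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def normal_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, best_so_far):
    """EI for maximization: E[max(f(x) - f*, 0)] when f(x) ~ N(mu, sigma^2)."""
    if sigma <= 0.0:
        return max(mu - best_so_far, 0.0)
    z = (mu - best_so_far) / sigma
    return (mu - best_so_far) * normal_cdf(z) + sigma * normal_pdf(z)

def gaussian_perturb(embedding, std, rng):
    """Isotropic Gaussian noise on a (first-token) embedding vector."""
    return [e + rng.gauss(0.0, std) for e in embedding]
```

In the paper's loop, candidates produced by something like `gaussian_perturb` would be scored (verifier correctness plus coherence), a surrogate would be fit to those scores, and the candidate maximizing `expected_improvement` would be decoded next.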
Rebuttal 1: Rebuttal: **1. Verifier Setup** Thanks for the comments. Regarding the questions about details of the verifier, **NO** separate or stronger LLM is used for verification (Line 31-33, right column); the same model as the generator is employed (i.e. LLaMA3-8B-Ins, Qwen2-7B-Ins, and Mistral-7B-Ins). - The verifier score $r_{verifier}(y) = \mathbb{1}_{y_v = y}$ is a binary indicator, where $ y_v$ is the model's regenerated answer based on the current question and previous responses. Specifically, $y_v$ is obtained by prompting the same LLM with a verification query: > *"Based on the given question and the previous answers, please provide your analysis and final answer."* The exact prompts used are listed in Appendix B.3. - We also conducted ablation studies (see *Verifier Comparison: Judgment vs. Generation*, Line 427, left column) to compare different verifier strategies and assess their impact on performance. This setup ensures a fair comparison, as no external or more capable model is used. It also highlights one of the main contributions of this work: our BO framework's ability to enhance reasoning capabilities using a single unified LLM without additional verifiers. Details on the verifier will be added to our paper. **2. Dimension Reduction** > While dimensionality reduction alleviates the curse of dimensionality, random projection may lose critical information. Further validation is needed to confirm that the negative impacts of dimensionality reduction are controllable. We agree that dimensionality reduction may lead to some information loss. However, it offers a practical tradeoff: retaining more information in high-dimensional spaces often results in poor sample efficiency for Bayesian optimisation, making it harder to find good configurations under a limited evaluation budget. 
As shown in Figure 6, we experimented with several projection dimensions and found that our current setting (d=50) provides a good balance between optimisation performance and computational cost. To assess the stability of random projection, we ran simulations using 50 different random projection matrices. The box chart below shows the distribution of results: ``` 0.850 ┤ ┐ ◀ Max │ │ 0.846 ┤ ┌─┘─┐ ◀ Q3 │ │ │ 0.842 ┤ │ ─ │ ◀ Median │ │ │ 0.838 ┤ └─┐─┘ ◀ Q1 │ │ 0.835 ┤ ┘ ◀ Min ``` The performance remains stable across multiple runs, indicating that random projection does not introduce significant variance or instability. **3. Critical neurons** > The definition of critical neurons relies on statistical methods; citing related work would help justify this choice. We appreciate the reviewer’s concern. Prior works have shown that it is possible to trace information flow within transformers and identify neurons with causal influence on model predictions by applying targeted interventions, such as activation replacement or ablation [1, 2]. Following this line of work, we **mask the critical neurons** identified for each input. This leads to a **significant drop** in accuracy to **13.27%** (from **62.14%** before masking). For comparison, we randomly masked the same number of neurons and repeated the experiment under identical settings, resulting in an average accuracy of **41.89%**. This substantial gap (**41.89%→13.27%**) demonstrates that the identified neurons are indeed functionally important, beyond what would be expected by chance. [1] Damai Dai et al., 2022. Knowledge Neurons in Pretrained Transformers. ACL [2] Kevin Meng et al., 2022. Locating and Editing Factual Associations in GPT. NeurIPS **4. Computational efficiency** > Regarding computational efficiency, readers may expect comparisons of time and space costs rather than solely token count statistics. We thank the reviewer for the suggestion. 
We supplement the token count statistics with comparisons between our method and RAP on inference time and memory usage. To evaluate the latter, we focus on the two variable components: (1) KV cache, and (2) intermediate activations, since model weights remain constant across methods. Using vLLM’s block-based memory tracking, we report both average and peak usage, sampled at 1-second intervals. |Dataset|Method|Time(min)|Intermediate Activations(avg,MB)|Intermediate Activations(peak,MB)|KV Cache(avg,MB)|KV Cache(peak,MB)| |-|-|-|-|-|-|-| |GSM8K|RAP|184.52|1628.7|1874.4|252.5|568.0| |GSM8K|Ours|**23.15**|**1137.2**|**1178.1**|**176.5**|**312.0**| |GSM-Hard|RAP|234.14|1985.4|2354.9|426.5|574.0| |GSM-Hard|Ours|**28.42**|**881.2**|**1096.2**|**254.6**|**336.0**| |SVAMP|RAP|142.52|1464.8|2089.5|384.5|494.0| |SVAMP|Ours|**18.41**|**932.4**|**1393.2**|**185.8**|**296.0**| |StrategyQA|RAP|149.73|1833.5|1935.9|241.4|376.0| |StrategyQA|Ours|**17.44**|**748.0**|**932.4**|**118.0**|**264.0**| The results show that our method's inference time is only 12.30% of RAP's while also consuming significantly less memory, further validating its computational efficiency advantage.
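The dimensionality-reduction step stress-tested in point 2 of this rebuttal (projecting the embedding space down to d = 50 with a random matrix before running Bayesian optimisation) is a standard Gaussian random projection. A minimal sketch, with all dimensions illustrative:

```python
import random

def random_projection_matrix(in_dim, out_dim, rng):
    """Gaussian random projection, scaled so squared norms are preserved in expectation."""
    scale = 1.0 / (out_dim ** 0.5)
    return [[rng.gauss(0.0, 1.0) * scale for _ in range(in_dim)]
            for _ in range(out_dim)]

def project(matrix, x):
    # Matrix-vector product: maps x from in_dim down to out_dim coordinates.
    return [sum(r_i * x_i for r_i, x_i in zip(row, x)) for row in matrix]
```

Because the map is linear, searching the low-dimensional space amounts to searching a random low-dimensional subspace of the original embedding space, which is why rerunning with many independently drawn matrices (as in the box chart above) is the natural stability check.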
A Trichotomy for List Transductive Online Learning
Accept (poster)
Summary: The paper studies the problem called list transductive online learning. The learner is given a sequence of instances $(x_1,\ldots,x_T)\in\mathcal{X}^T$. In each of the $T$ rounds, the adversary and the learner then do the following. The adversary picks an outcome $y_{t}\in \mathcal{Y}$. The learner is then asked to predict the outcome of $x_{t}$ by outputting a list $A_{t}\subseteq\mathcal{Y}$ of size $|A_{t}| = L$. The learner announces the list $A_{t}$. If the learner outputs a list $A_{t}$ such that $y_{t}\not\in A_{t}$, then the learner incurs an error. The problem is studied both in the realizable case, where there exists some hypothesis class $\mathcal{C}\subset \mathcal{Y}^{\mathcal{X}}$ such that for the sequence $(x_{1},y_{1}),\ldots,(x_{T},y_{T})$ there exists $c\in \mathcal{C}$ with $y_{i}=c(x_{i})$ for every $i\in[T]$ (here the algorithm's performance is measured in terms of how many mistakes it makes over the $T$ rounds), and the agnostic case, where there is no assumption on the relation between $x_{i},y_{i}$ and $\mathcal{C}$, and the algorithm's performance is measured in terms of the regret: the difference between how many mistakes the algorithm makes and how many mistakes the best hypothesis in $\mathcal{C}$ makes. The paper uses $Q=(\mathcal{X},L,\mathcal{Y},\mathcal{C})$ as a description for an instance of a list transductive online learning problem. Given such a $Q$, the paper considers two complexity measures.
The first complexity measure is the level-constrained $(L+1)$-Littlestone dimension: the depth of the largest perfectly balanced $(L+1)$-ary tree with nodes labelled by instances $x\in \mathcal{X}$, where within each level of the tree all nodes share the same value $x$, each node's $L+1$ edges connecting it to its children are labelled with different values in $\mathcal{Y}$, and the tree is realizable by $\mathcal{C}$, that is, each root-to-leaf path can be realized by a hypothesis in $\mathcal{C}$. This is denoted $D(Q)$. The second complexity measure is the level-constrained $(L+1)$-branching dimension, which is the largest natural number $d$ for which there exists a perfectly balanced $(L+1)$-ary tree (realizable by $\mathcal{C}$), with all nodes within a level labelled by the same $x\in \mathcal{X}$, such that each root-to-leaf path contains $d$ instances $x\in \mathcal{X}$ whose $L+1$ edges connecting them to their children are labelled with distinct values in $\mathcal{Y}$. This dimension is denoted $B(Q)$. The paper notes that $D(Q)\leq B(Q)$. The paper shows that the two complexity measures characterize the learnability of list transductive online learning in the realizable case. More specifically, the paper shows that if both of the dimensions are unbounded, then for any learning algorithm there exists an instance where the learner makes $\Omega(T)$ mistakes. If $B(Q)$ is unbounded and $D(Q)$ is bounded, then there exists a learning algorithm making at most $O(\log(T))$ mistakes, and for any learning algorithm there exists an instance where the learning algorithm makes $\Omega(\log(T))$ mistakes. If $B(Q)$ is bounded, then there is a learning algorithm that makes $O(1)$ mistakes. In the agnostic setting, the paper shows that the regret is sublinear if $D(Q)$ is bounded, and linear if $D(Q)$ is unbounded.
Claims And Evidence: Q: Are the claims made in the submission supported by clear and convincing evidence? If not, which claims are problematic and why? A: The paper contains proofs for the lower bounds of the realizable case in the main body of the paper. The paper states that the remaining proofs are deferred to the appendix, which I have not checked. Methods And Evaluation Criteria: Q: Do proposed methods and/or evaluation criteria (e.g., benchmark datasets) make sense for the problem or application at hand? A: I don't see how to answer the above for this theoretical article; let me know if I'm mistaken and give me an example of how to evaluate this question, and I will do so. Theoretical Claims: Q: Did you check the correctness of any proofs for theoretical claims? Please specify which ones, and discuss any issues. A: I read the proofs in the main body of the lower bounds for the realizable case, but could have missed something while reading them. Here is my understanding of the proofs; I encourage the authors to correct my understanding. If the dimension $D(Q)$ is infinite, then for a given algorithm $A$ the lower bound follows from the existence of an $(L+1)$-ary perfectly balanced tree of depth $T$, realizable by $\mathcal{C}$, where on each level the nodes are labelled by the same instance $x\in \mathcal{X}$ and the $L+1$ edges to each node's children have different labels. Thus, when the learner is presented with the instance $x_{i}$ on level $i$ and outputs a list of size $L$, the adversary can pick at least one branch of the tree where the learner mispredicts; since the tree has depth $T$, the learner makes a mistake in each round. Since $D(Q)$ is infinite, such a tree can be constructed for any $T$.
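The $\Omega(T)$ adversary argument just sketched can be played out as a toy game: with $L+1$ available labels, any size-$L$ list misses at least one, so an adversary walking down a shattered tree forces a mistake every round. The learner below is an arbitrary placeholder, and realizability is supplied by the shattered tree in the actual proof:

```python
def run_realizable_game(T, L, learner):
    """Adversary on a depth-T, (L+1)-ary shattered tree; label space is {0, ..., L}."""
    labels = set(range(L + 1))
    history = []
    mistakes = 0
    for t in range(T):
        predicted = learner(t, history)   # learner outputs a list of at most L labels
        assert len(predicted) <= L
        y = min(labels - set(predicted))  # a label outside the list always exists
        mistakes += 1                     # so the learner errs by construction
        history.append(y)
    return mistakes
```

For example, `run_realizable_game(100, 2, lambda t, h: [0, 1])` returns 100: no strategy avoids a mistake in any round.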
In the proof of the lower bound for $B(Q)$ being infinite, the guarantee is that for any $d$ there exists a realizable $(L+1)$-ary perfectly balanced tree, with the nodes on each level labelled by the same $x\in \mathcal{X}$, such that every root-to-leaf path contains $d$ nodes whose edges to their children are labelled with distinct values from $\mathcal{Y}$. Thus, presenting the learner with the sequence of instances from the levels of the tree $x_1,\ldots$, the adversary will again be able to make the learner make $d$ mistakes, by choosing, on the instances where there are $L+1$ distinctly labelled edges to children, the branch not outputted by the learner. However, since the above guarantee does not say anything about the depth of the tree or about when these $d$ distinct labels occur in the sequence, it is not given that one can find a sequence of length $T$ with this property. Thus, the paper presents a way of compressing such a tree into a subtree of depth at most $((L+1)^d-1)/L$, while still having the above property of each root-to-leaf path having $d$ nodes with distinct values on the edges to their $L+1$ children. Now solving $T = ((L+1)^d-1)/L$ implies $d = \Theta(\log(T))$ (omitting dependencies on $L$); thus the adversary can generate this tree of depth at most $T$ and make the learner make $\Omega(\log(T))$ mistakes. Experimental Designs Or Analyses: Q: Did you check the soundness/validity of any experimental designs or analyses? Please specify which ones, and discuss any issues. A: I don't see how to answer the above for this theoretical article; let me know if I'm mistaken and give me an example of how to evaluate this question, and I will do so. Supplementary Material: Q: Did you review the supplementary material? Which parts? A: No. Relation To Broader Scientific Literature: Be specific in terms of prior related findings/results/ideas/etc. A: I'm not that well versed in this literature, but the cited literature seemed relevant.
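Returning to the depth calculation in the Theoretical Claims answer above: the compressed tree has depth at most $((L+1)^d - 1)/L$ (a geometric sum), so inverting at horizon $T$ gives $d = \lfloor \log_{L+1}(LT+1) \rfloor = \Theta(\log T)$. A quick numeric check, with all values illustrative:

```python
import math

def tree_depth(L, d):
    """Depth bound of the compressed tree: ((L+1)**d - 1) / L, a geometric sum."""
    return ((L + 1) ** d - 1) // L

def mistakes_from_horizon(L, T):
    """Largest d with tree_depth(L, d) <= T, i.e. floor(log_{L+1}(L*T + 1))."""
    return int(math.log(L * T + 1, L + 1) + 1e-12)  # fudge guards float round-off
```

For instance, with $L = 2$ and $d = 3$ the depth bound is $(3^3 - 1)/2 = 13$, and a horizon of $T = 13$ indeed yields $d = 3$ forced mistakes.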
Specifically the paper: 1) Relates to work by, for instance, (Charikar, Pabbaraju, 2023), which studies list learning in the PAC setup, and explains how the dimension studied in that work being finite is not sufficient to characterize transductive online list learning. 2) Explains the relation to online learning (uncertainty on both labels and instances) and how the list setting has been studied there (Moran et al., 2023), and how, in a line of work initiated by (Ben-David et al., 1997), it has been studied what happens when the uncertainty over the instances is removed (so the sequence is known in advance). (I assume that the results of (Moran et al., 2023) do not imply the results in this paper; if this is not the case I encourage the authors to correct me.) I had one question: does list learning originate from the work of Brukhim et al. (2022)? Otherwise I think it would be good to cite the paper originally introducing it; my apologies to the authors if I missed it. Essential References Not Discussed: Except for possibly adding a reference for where list learnability originates, I don't see any missing references. Other Strengths And Weaknesses: Strengths: I would say that the paper is well written. From my understanding of the paper being the first to study this setting of list learning, it also seems original. The paper combines ideas from previous work. I enjoyed the paper. The paper also leaves an interesting question at the end of the article about whether the lists could be allowed to be unbounded, allowing for instance to output intervals of a certain size instead. Other Comments Or Suggestions: Congratulations on your paper. The following are the notes I took while reading the article. Line 133: namely $(x_1, x_2, \ldots, x_T)$ and reveals it to the learner. Line 152: was Q defined before being mentioned here?
Remark/note: I really liked the open question at the end of the article discussing the possibility of list learning with infinite size. To this end, could it make sense to include the parameters $L$ and $d$ in the statement of Theorem 3.1, so that the reader could think about how to improve the dependencies in the bound? (I would still keep Theorem 1.1 informal.) Line 184: should it be L(Q) and not LD(Q)? Line 200, second column: why is $-\infty$ included in N? Line 235-238: What about measurability in the agnostic setting? Also line 307-311: Lemma 4.1 in Cesa-Bianchi is, from my understanding, for finite label spaces; would this cause any measurability problems? 248: Initially, an adversary chooses a sequence of $T$ instances $X \in \mathcal{X}^T$ and reveals it to the learner. Definition 2.2: was $A^*$ defined prior? 265-269, second column: Maybe adding the reference to Lemma 4.1 in (Cesa-Bianchi and Lugosi, 2006) here also, if it also applies in this case, would help the reader. 326-329, second column: and every root to leaf path contains at least d nodes with edges to its children labeled by distinct elements of Y. And the tree is shattered by C. Line 538: should it be $V$ in the definition of $V_{x\rightarrow y}$? Questions For Authors: 1: Line 378-380, second column: How are the trees joined such that realizability holds, nodes on the same level have the same label, and the result is a subset of the nodes in a tree witnessing $B(Q)=d$? 2: Agnostic and realizable: $D(Q)$ also characterizes learnability in the realizable setting: if finite, the mistake bound is sublinear in $T$, and if infinite, linear in $T$ (so learnable if and only if $D(Q)<\infty$). However, the learning rates are more finely characterized depending on the finiteness of $B(Q)$ (Theorem 3.1). For the agnostic setting, the statement of Theorem 4.1 is only about learnability, which again is characterized by $D(Q)$ being bounded or unbounded.
Does finiteness of $B(Q)$ in the agnostic setting also give a more fine-grained characterization of the learning rate? 3: Is it correctly understood that Algorithms 1 and 2 are not necessarily efficient, since they have to consider version spaces for all labels in $\mathcal{Y}$, which might be infinite? Thanks for the answers to my questions. I would like to keep my score. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We really thank the reviewer for dedicating their time to assess our work. In particular, we really thank the reviewer for taking the time to carefully read our paper. We are delighted that the reviewer found that our contribution is original, our paper is well written, enjoyable to read, and contains an interesting open problem. We will make sure to correct the typos and incorporate minor suggestions mentioned by the reviewer for the camera-ready version. Below, we address major comments provided by the reviewer. - Your understanding of our lower bound proofs is fully correct. - As you mentioned, the results of Moran et al., 2023 do not imply the results in this paper. We have an example to show this in appendix C. In fact, we mentioned that example in the last paragraph of the first column of page 2. - As you mentioned, list learning originates from the work of Brukhim et al., 2022. - Regarding measurability issues, following the work of [1], we only require a sigma-algebra so that every subset of $\mathcal{Y}$ with cardinality $\mathrm{L}$ is measurable. Notably, following the work of [1], this assumption is enough for the agnostic setting. - Regarding our $\log(T)$ lower bound proof, each of the trees that we want to join can witness B(Q) = d. In fact, each of them can do so even in the restricted version space based on the label of its corresponding outgoing edge of the root node. As a result, they are level-constrained (nodes of each level are assigned to a fixed instance from the instance space), and so on. Now, the main observation is that in trees witnessing the level-constrained branching dimension being equal to some k, we do not require all outgoing edges of all nodes to correspond to distinct labels. Therefore, in each of those trees, we may add a level, assign all of its nodes an instance from the instance space, and label all of its outgoing edges with the same label.
Note that this modification still leads to a tree witnessing B(Q) = d in the restricted version space. Thus, we can make all trees have exactly the same levels by paying a number of additional levels less than or equal to $L \times$ their maximum initial depth. That is why we have a factor of the list size here. - The finiteness of B(Q) would imply a *slightly* improved upper bound in the agnostic setting. This is because, in the agnostic setting, we have a $\sqrt{T}$ factor, so a logarithmic improvement that can be achieved using the finiteness of B(Q) is not a big deal. - Regarding the computational complexity of our algorithms, we note that our algorithms require calculating the (L + 1)-level-constrained Littlestone dimension or (L + 1)-level-constrained branching dimension for concept classes. In the special case of binary classification, the (L + 1)-level-constrained Littlestone dimension equals the VC dimension. Moreover, we know the calculation of the VC dimension is computationally hard for general concept classes. Thus, our algorithms are not efficient for general concept classes. However, this is an issue for both PAC and adversarial online learning, even for binary classification. For instance, in the case of adversarial online learning, SOA involves computing the Littlestone dimension of concept classes defined by the online learner in the course of its interaction with the adversary, which is a challenging computation even when the concept class and the set of features are finite [2]. Notably, no efficient algorithm can achieve finite mistake bounds for general Littlestone classes [3]. We hope this rebuttal has clarified your questions. Finally, once again, we thank the reviewer for taking the time to carefully read our paper and provide many helpful suggestions. [1] S. Hanneke, S. Moran, V. Raman, U. Subedi, A. Tewari. Multiclass Online Learning and Uniform Convergence. In Proceedings of the 36th Conference on Learning Theory, 2023. [2] P. Manurangsi, A.
Rubinstein. Inapproximability of VC Dimension and Littlestone's Dimension. 30th Conference on Learning Theory, 2017. [3] A. Assos, I. Attias, Y. Dagan, C. Daskalakis, M. K. Fishelson. Online Learning and Solving Infinite Games with an ERM Oracle. 36th Conference on Learning Theory, 2023.
Summary: The authors provide a theoretical analysis of the list transductive online learning problem in this paper. They first establish upper and lower bounds for the minimax number of mistakes in the realizable setting, by which they solve an open problem raised in previous work. Then, in the agnostic setting, they provide an upper bound for the minimax expected regret and solve another open problem. The key contribution in their proof is introducing two new combinatorial complexity dimensions, named the Level-constrained Littlestone dimension and the Level-constrained Branching dimension. Finally, they raise the issue of eliminating factors from their upper bound in the realizable setting and leave it for future work. *** **Update after Rebuttal** I thank the authors for their responses. I encourage the authors to include the discussions about these related works in the final version. Currently, I have no other concerns. I have raised my score to 4. Claims And Evidence: All claims in this paper are theoretical results on the minimax number of mistakes in the realizable setting and the minimax expected regret in the agnostic setting. They are supported by rigorous proofs. Methods And Evaluation Criteria: The authors do not conduct any experiments in this work. Theoretical Claims: I roughly browsed through the entire proof process in this paper. Since I am not so familiar with the theory of online learning, I find it difficult to accurately determine whether there are issues in the proof. Experimental Designs Or Analyses: The authors do not conduct any experiments in this work. Supplementary Material: The authors do not provide any supplementary material.
Relation To Broader Scientific Literature: The key contribution of this work is introducing some novel combinatorial complexity dimensions, named the Level-constrained Littlestone dimension and the Level-constrained Branching dimension, which could bring new insights to both the online learning community and the transductive learning community, particularly for those working on theory. Since the transductive learning setting is also widely adopted in the graph learning area, for example in the node classification task, this work could also bring some insights to the graph learning community. Essential References Not Discussed: From my view, I could not find related works that are essential to understanding the key contributions of the paper but are not currently cited or discussed in the paper. Other Strengths And Weaknesses: The strength of this paper is introducing novel techniques to establish theoretical bounds for the minimax number of mistakes in the realizable setting and the minimax expected regret in the agnostic setting, by which two open problems are solved. The results in this paper provide learning guarantees for models in the transductive online setting. The weakness of this paper is that the presentation needs further improvement. I encourage the authors to provide more explanations for some key concepts, such as the Level-constrained Littlestone dimension and the Level-constrained Branching dimension, by giving some examples or visualized illustrations. Besides, it would be better to place Section 1.2 after Section 2, since the readers need some background to understand the meaning of notations appearing in Section 1.2. Other Comments Or Suggestions: The letter $T$ in the article is in regular font in some places (Definition 2.5), while in others it is italicized (Theorem 1.1). It's better to unify them. Questions For Authors: 1. Could you elucidate the difference between transductive online learning and the traditional transductive learning introduced in [1,2,3,4,5]? 2.
In addition to introducing new combinatorial complexity dimensions, is there any other novelty in your proof compared with previous studies? Since you claim that two open problems are solved, why can't the previous proof techniques solve these problems, while your proof can address them? [1] Estimation of Dependences Based on Empirical Data: Empirical Inference Science. Vladimir Vapnik, 1982. [2] Statistical Learning Theory. Vladimir Vapnik, 1998. [3] PAC-Bayesian supervised classification: The thermodynamics of statistical learning. Olivier Catoni, 2007. [4] Combining PAC-Bayesian and generic chaining bounds. Jean-Yves Audibert and Olivier Bousquet, JMLR 2007. [5] Explicit learning curves for transduction and application to clustering and compression algorithms. Derbeko et al., JAIR 2004. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for dedicating their time to assess our work. We are delighted that the reviewer found that our paper contains novel techniques and novel combinatorial complexity measures. We will make sure to correct the typos and incorporate minor suggestions mentioned by the reviewer for the camera-ready version. Below, we address major comments provided by the reviewer. - We appreciate the reviewer's feedback regarding improvement of the presentation of our paper. We will change notations in section 2.3 to clarify the definitions in the camera-ready version. Also, we will add a figure to clarify the distinction between the different combinatorial structures that we have in the camera-ready version. - While there is a conceptual connection between transductive online learning and traditional transductive learning, as noted by Hanneke et al., 2023, there is an important distinction. Generally speaking, in online learning, we make no probabilistic assumptions regarding the data-generating mechanism. However, in traditional transductive statistical learning, we usually assume that we have an underlying data distribution, or that the learner observes labels for a uniform-random subsample of the data (as opposed to predicting online in the order given). - As we attempted to explain in the final paragraph of page 3, a technique inspired directly by the Halving algorithm does not yield a logarithmic $\log(T)$ upper bound in our setting, even when the label space is finite. This stands in contrast to the multiclass setting, where this technique is effective when the label space is finite (Hanneke et al., 2023). The main novelty in our proof of the trichotomy result lies in extending the shattering technique of Hanneke et al., 2024 to the list setting. In particular, a key and novel component of our algorithm is a new notion of shattering that exploits the sequential nature of list transductive online learning.
This result appears fully in the appendix, which serves as the main technical contribution of this work. On the other hand, the solution to the second open problem follows, more or less, from a recent idea in the field. We hope this rebuttal has clarified the novelty of our contribution beyond the introduction of new combinatorial complexity dimensions.
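For contrast with the Halving-style approach the rebuttal above says fails in the list setting, here is the classic Halving algorithm for realizable binary online learning over a finite concept class: predict by majority vote of the version space, so every mistake removes at least half of it, giving at most $\log_2 |\mathcal{C}|$ mistakes. The threshold class used below is purely illustrative.

```python
from collections import Counter

def halving(concepts, stream):
    """concepts: dict name -> predictor; stream: (x, y) pairs realizable by some concept.
    Returns the number of mistakes, guaranteed <= log2(len(concepts))."""
    version_space = dict(concepts)
    mistakes = 0
    for x, y in stream:
        votes = Counter(c(x) for c in version_space.values())
        prediction = votes.most_common(1)[0][0]  # majority vote of surviving concepts
        if prediction != y:
            mistakes += 1
        # keep only the concepts consistent with (x, y); on a mistake this
        # removes the (majority) half that voted for the wrong label
        version_space = {n: c for n, c in version_space.items() if c(x) == y}
    return mistakes
```

For example, with eight threshold concepts on the integers, any realizable stream incurs at most $\log_2 8 = 3$ mistakes.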
Summary: This paper tackles the combined problem of Moran et al.'s (2023) *list online classification* and Hanneke et al.'s (2024) *transductive online learning* (where the sequence of instance points is given in advance). Two natural variants of the Littlestone dimension are proposed, combining the (L+1)-Littlestone trees of Moran et al. (2023) with the level-constrained (/-branching) trees of Hanneke et al. (2024). These two combinatorial dimensions exactly define the three possible mistake rates in the realizable setting (constant, logarithmic, and linear), and the first determines whether sublinear regret is possible in the agnostic setting. Claims And Evidence: Correct proofs. Methods And Evaluation Criteria: N/A. Theoretical Claims: All proofs are pretty standard and correct. See below for more comments. Experimental Designs Or Analyses: N/A. Supplementary Material: Proof techniques are standard and correct. Relation To Broader Scientific Literature: All good. Essential References Not Discussed: All good. Other Strengths And Weaknesses: Please cite Moran et al. (2023) in Defs. 2.11-2.13, otherwise it might seem like you came up with these notions. Similarly, maybe cite Hanneke et al. (2024) for Defs. 2.14, 2.15 and say that you extend their notions. Other Comments Or Suggestions: This paper continues an interesting line of work on variants of online classification. While the proof techniques are rather standard (mostly a natural combination of Hanneke et al. (2024) and Moran et al. (2023)), this paper should be interesting for the theoretical ICML community. While the paper is rather incremental, I vote for acceptance. Minor comments: Some notation is not defined, e.g. $\Pi(..)$, presumably for distributions. Also, $\mathcal{A}$ is used for subsets of labels (in the def. of $\mathcal{Y}_L$) and as the set of all deterministic algorithms. Any reason you use $\mathfrak{s}$? Typically $\mathfrak{s}$ denotes the star number in such contexts.
Typo: A $L$-ary --> An $L$-ary. Consider adding an $L$ or $L+1$ index to $D(\mathcal{Q})$ and $B(\mathcal{Q})$ to make the dependence clearer (as in Moran et al. (2023))? Similarly, $L(\mathcal{Q})$ could be misleading, as $L$ is also the list size. Why not e.g. $k$ for list size (like in some other papers on list learning)? Questions For Authors: Is the assumption of $|y^\star|<|x^\star|$ explicitly used anywhere? It's not a big restriction but seems unnecessary. E.g., why would you demand $y_1=y_2$ for $T=2$? Even in cases with $|y^\star|=|x^\star|$ you can obviously still "learn", e.g., if only one hypothesis is consistent with a prefix of the sequence, the learner can predict everything following the prefix correctly, even if each $x_i$ has its own new label. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for dedicating their time to assess our work. In particular, we thank the reviewer for taking the time to verify that the work is technically correct. We are delighted that the reviewer found that our paper is interesting for the theoretical ICML community. We will make sure to correct the typos and incorporate the minor suggestions mentioned by the reviewer in the camera-ready version. Below, we address the question raised by the reviewer about the assumption of $|y^{\star}| < |x^{\star}|$. - No. In fact, we use that notation to formally define Deterministic List Transductive Online Learning Rules. In particular, a Deterministic List Transductive Online Learning Rule is a mapping that maps each finite sequence of instances, together with a finite sequence of labels whose size is *smaller* than the size of the sequence of instances, to a set of $L$ labels. We will change the notation in this part to clarify the definitions in the camera-ready version.
Summary: They studied the problem of list transductive online learning. In the realizable setting, they show a trichotomy of possible rates for the minimax number of mistakes. In the agnostic setting, they show a \tilde{O}(\sqrt{T}) regret bound. Claims And Evidence: Theoretical paper, and they have proved everything theoretically. Methods And Evaluation Criteria: Yes, theoretical paper. Theoretical Claims: I checked the proofs in the main body and they do make sense. Experimental Designs Or Analyses: No experiments. Supplementary Material: I did not check the appendix. Relation To Broader Scientific Literature: They study the problem of list transductive online learning. List learning has been studied both experimentally and theoretically. It is relevant to multi-class classification and conformal prediction. Essential References Not Discussed: I cannot think of any missing references. Other Strengths And Weaknesses: The paper is written nicely (I was confused in some parts and asked my questions below), and they solved multiple open problems from earlier papers. I am not too surprised by the techniques; most of them are standard techniques in the online learning literature, I think (I did not go over the appendix), but the application is novel. Other Comments Or Suggestions: Check the questions. Questions For Authors: Questions: For the concepts in the concept class, do you assume that they give one label y_i to each example x_i, or a list of labels? Def 2.10: I am confused here. Usually in Littlestone trees each root-leaf path is labeled by a hypothesis; here each node is labeled by a hypothesis. Why is that the case here? Def 2.11: usually in a Littlestone tree, each branch is labeled by a distinct y; here I don't think you mean that |Y|=L+1 (and so all possible labels are used in different branches). Can you clarify this? Def 2.14: this is exactly a Littlestone tree, no? Or is it possible that |Y|>L+1, and that's the difference from a Littlestone tree? 
Def 2.15: So a level-constrained (L+1)-branching tree is, in some sense, a subtree of an (L+1)-Littlestone tree. In Section 2.3, I was very confused; perhaps you can add some pictures, explain which one is a Littlestone tree, and, for the ones that are not standard Littlestone trees, explain the crucial difference that the reader needs to pay attention to. Line 385-386 (the first line in this column): can you explain why (L+1) is being multiplied? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for dedicating their time to assess our work. In particular, we thank the reviewer for taking the time to verify that our work is technically sound. We are delighted that the reviewer found that our paper contains novel applications of the techniques in the literature, and moreover mentioned that our paper is written nicely. Below, we address the questions raised by the reviewer. - We assume that each concept from the concept class assigns one label from the label space to each instance from the instance space. - In Definition 2.10, we just assign symbols from two abstract spaces to the nodes and edges of a Perfect Rooted L-ary Tree. - In Definition 2.11, we just replace the abstract spaces in Definition 2.10 with the instance space and the label space. Moreover, we may impose the property of having distinct labels on all outgoing edges of any given node in a tree in the definitions of dimensions, such as Definition 2.13. Notably, as you mentioned, we do not require that |Y| = L + 1. - Definition 2.14 is the definition of the Level-constrained (L+1)-Littlestone dimension. Here, there are two main differences from the definition of the standard Littlestone dimension. First, the witnessing tree is an (L+1)-ary tree instead of a binary tree. Second, all nodes at the same level of the witnessing tree should correspond to a single instance from the instance space. - In a tree witnessing the Level-constrained (L+1)-Branching Dimension, we may have outgoing edges of a node corresponding to the same label. This contrasts with a tree witnessing the (L+1)-Littlestone Dimension, in which all outgoing edges of any given node should correspond to distinct labels. - We appreciate the reviewer's feedback regarding improving the presentation of our paper. We will change the notation in Section 2.3 to clarify the definitions in the camera-ready version. 
Also, we will add a figure in the camera-ready version to clarify the distinction between the different combinatorial structures that we use. - Let us answer your question for L + 1 = 2 for simplicity; the extension to an arbitrary L is straightforward. Intuitively, suppose you have two level-constrained trees of the same depth. If you want to join these two trees by adding a root node while still keeping the level-constraint property for the new tree, your new tree can have at most 2 times the depth of the two initial trees.
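The depth argument above can be written out as a short calculation. This is our paraphrase of the rebuttal's intuition, with illustrative notation (the exact constants depend on the paper's definitions):

```latex
% Join two level-constrained trees T_1, T_2 of depth d under a new root.
% The level constraint forces every level of the joined tree to use a
% single instance, so the levels of T_1 and T_2 cannot be shared and
% must be stacked one after the other:
\[
  \mathrm{depth}(T) \;\le\; 1 + \mathrm{depth}(T_1) + \mathrm{depth}(T_2)
  \;=\; 1 + 2d ,
\]
% i.e., roughly twice the depth of the initial trees. Joining L+1 such
% subtrees under an (L+1)-ary root stacks all of their levels, giving
% depth(T) <= 1 + (L+1)d : this is where the factor (L+1) asked about
% for lines 385-386 comes from.
```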
Random Registers for Cross-Domain Few-Shot Learning
Accept (poster)
Summary: In this work, the authors propose a two-stage learning framework, namely REAP, to tackle the CDFSL problem. During source-domain training, REAP randomly masks the most discriminative region, fills the erased region with random prompts, and then optimizes the pretrained ViT on the source data. During target-domain fine-tuning, REAP optimizes the learnable prompt to adapt to the target data. The proposed REAP achieves comparable performance among all methods on multiple benchmarks. Claims And Evidence: The claim is relatively weak; previous work [1] already discusses some similar experimental results regarding learning prompts on a source dataset and directly evaluating on target datasets, where prompt tuning is enough to tackle this issue. [1] Learning to Prompt for Vision-Language Models, IJCV Methods And Evaluation Criteria: Though the authors discuss the problem from the sharpness-aware minimization aspect, the proposed method is still a vanilla pretrain-on-source, prompt-tune-on-target paradigm. The reviewer still doubts the novelty of the proposed method. Theoretical Claims: The theoretical claim is also weak. The perturbation ϵ is relatively small and ensures that ω + ϵ stays near ω in the loss landscape. However, the proposed method introduces too many random registers to learn more domain-agnostic information, which conflicts with the original SAM hypothesis. Experimental Designs Or Analyses: The experimental results are not sufficient; some important analyses are lacking, for example, an ablation study of the random erasing strategy (i.e., simply masking without using random registers to fill), the initialization methods of the random registers, and a comparison between the proposed method and VPT. Supplementary Material: The reviewer has read the supplementary material. The useful parts include: Similar Impact on deep prompts and Adapting the Random Registers to different std. 
Relation To Broader Scientific Literature: The proposed method is largely related to few-shot learning and prompt tuning, but has limited contribution to future work on these topics. Essential References Not Discussed: [1] Visual Prompt Tuning, ECCV 2022 Other Strengths And Weaknesses: N/A. Other Comments Or Suggestions: See all above weaknesses for details. Questions For Authors: We recommend the authors clarify the difference between VPT (with backbone pretraining) and the proposed method. The proposed method has large similarity with VPT, which may limit the novelty of the proposed method, though this work tries to investigate the performance issue from the SAM aspect. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your suggestion.

## **1. Claims**

[1] studies the effectiveness of **prompt learning on in-domain data**, while we specifically target **extreme cross-domain shifts** (e.g., natural images to satellite images), where standard prompt tuning fails. As shown below, REAP outperforms [1] by **+30%** under 5-shot source-domain pretraining settings and by **+9%** under target-domain finetuning. To further validate REAP's effectiveness, we conducted extensive comparisons with **prominent prompt tuning methods** (e.g., VPT, CoOp, MaPLe) across extreme cross-domain benchmarks.

| Source-domain training | Cropdiseases | EuroSAT | ISIC | ChestX | Ave. |
| ---------------------- | ------------ | --------- | --------- | --------- | --------- |
| [1]CoOp | 38.48 | 54.15 | 25.74 | 21.36 | 34.93 |
| COCOOP | 40.36 | 60.93 | 27.38 | 22.47 | 37.79 |
| MaPLe | 35.28 | 50.83 | 23.65 | 19.63 | 32.35 |
| VPT | 77.92 | 75.93 | 49.89 | 24.30 | 57.01 |
| **REAP** | **96.68** | **90.76** | **55.76** | **26.84** | **67.51** |

| Target-domain finetuning | Cropdiseases | EuroSAT | ISIC | ChestX | Ave. |
| ------------------------ | ------------ | --------- | --------- | --------- | --------- |
| [1]CoOp | 92.82 | 84.92 | 42.93 | 22.83 | 60.88 |
| COCOOP | 90.57 | 81.32 | 42.15 | 21.89 | 58.98 |
| MaPLe | 93.07 | 89.35 | 46.56 | 23.16 | 63.04 |
| VPT | 94.80 | 89.48 | 46.35 | 26.40 | 64.26 |
| **REAP** | **98.35** | **92.64** | **58.28** | **29.21** | **69.62** |

We can see that existing prompts optimize for *in-domain* adaptation and harm the performance when facing a huge domain gap, while REAP's **random register perturbation** explicitly **alleviates the domain gap**. Similar results in Fig. 1, Fig. 3, and Fig. 4 also verify that regular prompts are not enough for handling the CDFSL task.

## **2. Methodological Novelty**

While prior works (e.g., VPT) naively add learnable prompts, we are the **first to find that this harms the transferability** to target domains, and we **theoretically design the random register** and prove it **suppresses domain-specific attention patterns**. Based on it, we propose a novel method, REAP, to enhance perturbations on attention maps. This mechanism is novel, as noted by Reviewers zHqd and UKT5: *"The proposed method is innovative... new in the CDFSL generalization viewpoint."*

## **3. Sharpness-aware minimization**

**(1) Performance Validation** The experimental results (Tab. 3) demonstrate significant performance improvements (e.g., **+4.6%** on ISIC), which empirically confirm that the introduced randomness is **well-calibrated and beneficial** rather than "excessive" or detrimental.

**(2) Sharpness Validation** The sharpness analysis in **Fig. 3b** explicitly validates that our method reduces loss sharpness by **75%** compared to vanilla VPT, indicating that the random registers **enhance flat-minima discovery** without violating SAM's core principles. This demonstrates that our design **extends** SAM's hypothesis to cross-domain scenarios rather than conflicting with it.

**(3) Perturbation magnitude** Random registers are initialized with a near-zero magnitude (σ=0.01) and adaptively scaled during training. Visualization in **Fig. 5** further confirms that their impact on attention maps is **subtle**, primarily acting as "regularized perturbations" to suppress domain-specific biases rather than overwhelming the original features. This design is consistent with SAM's small-perturbation premise while addressing cross-domain challenges.

In all, our design *extends* SAM to cross-domain settings without violating its core hypothesis.

## **4. Experimental Completeness**

The requested analyses **are fully provided in the paper**:

**Ablation: Simple Masking vs. Random Registers**
- **Table 3a**: Naive masking (no registers) drops accuracy by **5.8%**.

**VPT Comparison**
- **Fig. 1** has already shown that VPT harms performance during pretraining, while random registers improve model transferability.

**Initialization Analysis**
- **Supp. Fig. 17**: Gaussian initialization of the random registers.

## **5. Clarification on VPT Similarity**

While the reviewer notes "similarity with VPT", we **explicitly differentiate**:

- **Motivation (Fig. 1)**: We are the **first to find** that **VPT fails under extreme domain gaps** due to *absorbing domain information*, and **propose random registers** to solve it.
- **Solution**: We come up with **REAP** to replace *fixed learnable prompts* with **random registers** during source training.
- **Result**: REAP outperforms VPT by **4%** on distant domains.
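For concreteness, the erase-and-fill mechanism described in this rebuttal (mask the most-attended patch tokens, fill them with near-zero Gaussian random registers) could be sketched as below. This is a toy illustration, not the paper's implementation: the function name, shapes, top-k erasing rule, and `erase_ratio` are assumptions for the sketch; only the σ=0.01 initialization is taken from the rebuttal.

```python
import numpy as np

def fill_with_random_registers(tokens, attn_scores, erase_ratio=0.1,
                               sigma=0.01, rng=None):
    """Replace the most-attended patch tokens with Gaussian random registers.

    tokens:       (num_patches, dim) patch embeddings
    attn_scores:  (num_patches,) attention received by each patch
    erase_ratio:  fraction of patches to erase (assumed hyperparameter)
    sigma:        std of the random registers (near-zero, per the rebuttal)
    """
    rng = rng or np.random.default_rng(0)
    out = tokens.copy()
    k = max(1, int(erase_ratio * len(tokens)))
    # Indices of the k most discriminative (highest-attention) patches.
    top_idx = np.argsort(attn_scores)[-k:]
    # Fill erased positions with fresh random registers instead of zeros,
    # so the edit perturbs the attention map rather than deleting tokens.
    out[top_idx] = rng.normal(0.0, sigma, size=(k, tokens.shape[1]))
    return out, top_idx
```

The non-erased tokens pass through unchanged; only the top-attention positions are resampled each call, matching the "random rather than learnable" register idea.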
Summary: This paper deals with the cross-domain few-shot learning (CDFSL) problem, which needs to tackle huge domain gaps. Existing methods utilizing learnable prompts might learn domain-specific information of the source domain, while failing to generalize to distant target domains. This paper proposes to leverage random prompts as an effective method to solve this issue. Furthermore, the random prompts are well interpreted with Sharpness-Aware Minimisation (SAM) and related analysis. Based on the random prompt idea, Registers Enhanced Attention Perturbation (REAP) is proposed to both perturb the image tokens and add random noisy tokens at the source-domain stage, which further increases the generalization to target domains. Experiments are conducted on the standard CDFSL benchmarks. The proposed method consistently outperforms state-of-the-art methods. Comprehensive ablation studies and parameter analyses are conducted. ## update after rebuttal The random prompt idea is new and novel in the context of CDFSL. The explanation using Sharpness-Aware Minimisation (SAM) is reasonable and insightful. No further concern remains after rebuttal. Therefore, I will keep my score as Accept. Claims And Evidence: Yes. The main claim is that random prompts can improve the generalization to target domains, while learnable prompts impede the generalization. This claim is well interpreted and verified experimentally and analytically with a reasonable formulation of Sharpness-Aware Minimisation. Methods And Evaluation Criteria: The proposed method is innovative, consisting of perturbing the attention of image tokens and adding random register tokens. The proposed method and perspective are new from the CDFSL generalization viewpoint. Evaluation benchmarks and criteria are standard. Theoretical Claims: No proof in this paper. 
Experimental Designs Or Analyses: The experiments are sound and comprehensive, involving SOTA comparison, ablation studies of the proposed components, and analysis of important model parameters such as the number of tokens, the perturbation ratio, etc. Moreover, a visualization comparison is also presented for random vs. learnable tokens. Supplementary Material: Yes. The supplementary material provides the necessary background on the datasets and Sharpness-Aware Minimisation, and more results. Relation To Broader Scientific Literature: Prior works leverage prompt tuning for downstream task finetuning, i.e., learnable tokens. This usually works for near-domain generalization or relevant tasks. However, CDFSL tackles distant domains with a huge domain gap. This paper challenged the learnable-token method and proposed a new random register method for CDFSL. What's more, the interpretation perspective is novel. It attributes the effectiveness of random registers to Sharpness-Aware Minimisation, building theoretical support for random registers. The proposed idea is an extension and further exploration of Registers in (Darcet et al., 2024). Essential References Not Discussed: Yes. Other Strengths And Weaknesses: Details are provided in the above sections. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your thorough and insightful review of our submission. Your recognition of our work’s **innovative integration of random prompts with SAM principles** and its **practical value in addressing extreme domain gaps** is deeply encouraging. We will keep on polishing our paper in the final version. Thank you again for your appreciation!
Summary: Based on an intriguing observation that prompt tuning could be harmful to the generalization of ViT, and the related analysis, this paper develops a novel solution for cross-domain few-shot learning by replacing some clustered patches with random registers. Extensive experiments on four datasets demonstrate the effectiveness and superiority of the proposed method. ## Update After Rebuttal I have checked the authors' rebuttal and found most of my concerns have been solved, so I choose to keep my score as Weak accept. Claims And Evidence: The claims in this paper are well supported by the related analysis and experimental results. However, I have the following questions: 1. In Figure 5, we can see that adopting learnable registers can make the model concentrate on regions irrelevant to the object, while random registers will guide the model's attention to the object. Does this mean that the learnable registers are useless or even have negative effects on classification in the **source domain**? Does this observation conflict with the observation in the previous work [1]? 2. Additionally, based on my own knowledge, the object-focused attention obtained in the **source domain** may not be a good indicator of better generalization ability, since it means that the model may capture more object-related high-level semantic information, which can hardly be transferred to the **target domain**. So could the authors provide more discussion of this observation to resolve my confusion? [1] Darcet T, Oquab M, Mairal J, et al. Vision Transformers Need Registers[C]//The Twelfth International Conference on Learning Representations. Methods And Evaluation Criteria: The proposed method is reasonable and easy to understand. The effects of all core components are studied by the experiments and ablation studies through well-defined evaluation criteria. Theoretical Claims: The theoretical claims are inspired by previous works and I have confirmed their correctness. 
I just have one small question: 1. The authors evaluate the loss sharpness of different methods by adding Gaussian noise perturbations to the attention map as in Eq. (4). My question is why only the attention map is chosen for this study? In my opinion, the registers introduced at the input layer will influence all parts (e.g., layer normalization or the feed-forward network) of the subsequent Transformer blocks, not just the attention map. Could the authors give some explanation for this choice? Experimental Designs Or Analyses: The experiments and ablation studies are adequate, and the related discussions and analyses are reasonable. Despite this, I have some concerns about the experimental results: 1. As a cross-domain method, it is better to achieve higher classification accuracy on the target-domain samples **without** severely sacrificing performance on the source domain. Moreover, Figure 5 shows the random registers may lead to better results on the source domain. So could the authors compare the results of different methods on the source miniImageNet dataset to validate the claim of Figure 5? 2. Could the proposed method be generalized to a more complex and diverse dataset, namely Meta-Dataset, which is a widely used benchmark for cross-domain few-shot learning, under the more difficult varied-way varied-shot setting? 3. It is better to perform a comparison of the training/inference time and parameter size to show the efficiency of the introduced random registers. Supplementary Material: I have carefully checked all the parts of the supplementary material. Relation To Broader Scientific Literature: The key contributions of this paper may be related to many research areas, including domain generalization, domain adaptation, transfer learning, etc., and many practical applications, such as few-shot classification for medical, remote sensing, or agriculture images. Essential References Not Discussed: The related works are appropriately cited. 
Other Strengths And Weaknesses: Strengths: 1. This paper is clearly presented with a good organization; the method is well motivated and easy to understand. 2. The observation is interesting and the discussions seem to be reasonable. 3. The contributions provide a new insight for understanding the domain generalization of the ViT model. Weaknesses: Please refer to the questions in "Claims And Evidence", "Theoretical Claims" and "Experimental Designs Or Analyses". Other Comments Or Suggestions: It may be better to enlarge the figures to make the texts in them more readable. Questions For Authors: Please refer to the questions in "Claims And Evidence", "Theoretical Claims" and "Experimental Designs Or Analyses". Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's constructive feedback. Below are detailed responses to your questions:

## **1. Clarification on Learnable Registers (Fig. 5)**

##### (1) Source-domain performance and visualization

| Model | Source-domain | Target-domain |
| ------------------ | ------------- | ------------- |
| Baseline | 97.83 | 64.07 |
| Learnable register | 97.87 | 63.17 |

Learnable registers achieve slightly **higher source-domain accuracy** by **exploiting patterns (over)fitting the source domain**, so they can suffer in cross-domain generalization. For example, in Fig. 5, birds are often seen with branches in the source domain, and the learnable register focuses on the bird and the branch (domain-specific contextual cues), which is a pattern that is only useful in the source domain (i.e., **overfitting to the source domain**) but may be useless in target domains. In contrast, random registers force attention to the bird itself, with less overfitting to the source domain, and thus benefit target-domain generalization. Our visualization is designed to verify whether the model captures such domain-specific (object-irrelevant) patterns; the generalization is also quantitatively verified by the domain similarity in Fig. 4 and the sharpness in Fig. 3.

##### (2) Relation to [1]

**[1] focuses on in-domain training,** utilizing registers to capture the global information in the in-domain data, thereby reducing the outlier values in attention maps. In contrast, **our method focuses on the generalization to target domains**, by resisting the overfitting to in-domain data. Our observation is not contrary to the effect of learnable registers in resisting outlier values in attention maps, and is consistent with [1] in finding that learnable registers can absorb in-domain information. We take a step further to identify such in-domain information as domain-specific information, which is further handled by our random registers.

## **2. Sharpness Evaluation on Attention Maps**

We choose the attention map for the sharpness experiments because registers mainly interact with other tokens through the self-attention mechanism. In other parts of the ViT, such as the FFN, each token is processed separately and therefore can hardly reflect the influence of registers. To verify it, we use LayerNorm and FFN perturbations to report the sharpness, and we can see **no significant influences** (Δ < 0.01) compared with the attention map.

| Component | Baseline | Learnable register | Random register (Ours) |
| :-------- | :------- | :----------------- | :--------------------- |
| **Att.** | 1.6 | 2.3 | 0.6 |
| **LN** | 0.04 | 0.04 | 0.03 |
| **FFN** | 0.06 | 0.07 | 0.06 |

*(Lower values indicate flatter minima and better generalization)*

## **3. Source-domain performance comparison**

REAP balances **moderate source-domain accuracy** with **significant target-domain gains**, as shown below. A **1.5% source-domain drop** (which is acceptable) enables a **+2.68% target-domain gain**, which is a favorable tradeoff for CDFSL.

| Method | Source-domain | Target-domain |
| ------------------ | ------------- | ------------- |
| Baseline | 97.83 | 64.07 |
| Learnable register | 97.87 | 63.17 |
| Random register | 97.50 | 65.05 |
| **REAP (ours)** | 96.33 | 66.75 |
| Random-mask | 96.28 | 60.90 |
| Cluster-mask | 94.23 | 64.29 |

## **4. Generalization to Meta-Dataset**

Due to time and resource constraints, we first pretrain on our dataset (miniImageNet), and then validate on parts of the Meta-Dataset under the 5-way 5-shot protocol below.

| Dataset | Baseline | REAP | Δ |
| :--------------- | :------- | :---- | :-------- |
| **Birds** | 94.23 | 96.82 | **+2.59** |
| **Fungi** | 61.03 | 64.39 | **+3.36** |
| **VGG Flower** | 89.64 | 90.03 | **+0.39** |
| **Traffic Sign** | 60.46 | 62.37 | **+1.91** |
| **Ave.** | 76.34 | 78.40 | **+2.06** |

## **5. Efficiency analysis**

1. **Training Time**: REAP introduces **<7% additional training time** compared to the baseline (127.08s vs. 118.98s per epoch), attributable to the lightweight random register sampling.
2. **Inference Cost**: Inference time **remains identical** to the baseline with no additional overhead. Architectural parity ensures seamless deployment in real-world applications.
3. **Parameter Efficiency**: **Only one learnable standard deviation** is added to control register initialization. This demonstrates that REAP's **lightweight design** achieves significant cross-domain gains with near-zero parameter and time penalties.

## **6. Figure Readability Improvement**

We promise we will polish our paper in the final version.
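A minimal sketch of the Gaussian-perturbation sharpness probe discussed in this review thread. This is a toy stand-in, not the paper's Eq. (4): we perturb attention logits with Gaussian noise and report the mean change in a simple squared-error loss as the sharpness proxy; the shapes, the mean-pooled prediction, and the loss are all assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_sharpness(logits, values, target, sigma=0.1, trials=32, rng=None):
    """Estimate loss sharpness w.r.t. Gaussian noise on the attention map.

    logits: (n, n) pre-softmax attention logits
    values: (n, d) value vectors; prediction = mean-pooled attended values
    target: (d,) target vector (squared error stands in for the task loss)
    Returns mean(loss(perturbed) - loss(clean)); flatter minima give
    values closer to zero.
    """
    rng = rng or np.random.default_rng(0)

    def loss(lg):
        pred = (softmax(lg) @ values).mean(axis=0)
        return float(((pred - target) ** 2).sum())

    base = loss(logits)
    deltas = [loss(logits + rng.normal(0.0, sigma, logits.shape)) - base
              for _ in range(trials)]
    return float(np.mean(deltas))
```

With `sigma=0` the estimate is exactly 0; larger estimates under the same noise scale indicate a sharper loss surface around the current attention map.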
Compositional Flows for 3D Molecule and Synthesis Pathway Co-design
Accept (poster)
Summary: The paper introduces a novel flow matching framework, 3DSynthFlow, for generating synthesizable molecules within protein pockets by sequentially selecting discrete building blocks and simultaneously modeling their coordinates. The authors evaluate 3DSynthFlow against all 15 protein targets in the LIT-PCBA virtual screening benchmark. Claims And Evidence: The claims are adequately supported by the evidence presented in the paper. Methods And Evaluation Criteria: The authors evaluate 3DSynthFlow on targets in the LIT-PCBA dataset using docking as an oracle. The exhaustive validation of the model on several protein targets and its consistent strong performance strengthen the results of the paper. However, the evaluation of the generated molecules for diversity and their respective properties is somewhat lacking, as discussed below in the Weaknesses section. Theoretical Claims: N/A Experimental Designs Or Analyses: Discussed below. Supplementary Material: Yes, including the sections corresponding to pre-training the compositional flow and state flow model, the action space, and the architecture. Relation To Broader Scientific Literature: The paper builds on previous literature on synthesizable molecule generation with GFlowNets, and represents a step forward in template-based molecular synthesis with a new 3-dimensional flow matching component for atomic coordinates of selected building blocks. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - This reviewer notes that the 3D information provided by the state flow model allows for better-informed building block selections with the GFlowNet at intermediate steps during molecule generation compared to 2D methods. 
- This reviewer agrees that reaction template-based generation constraints are important for practical molecular generation in 3D, and finds the sequential design apt for ensuring synthesizability while retaining the ability to denoise building block coordinates according to their respective local time steps. Weaknesses: - The evaluation of the diversity and chemical properties of generated molecules is insufficient. The authors do not report the diversity of molecules of the 3D co-design model (as measured by average Tanimoto similarity or number of high-scoring modes, etc.) compared to the 2D models or compare these metrics between molecules generated for different protein targets. - The approach seems to require training two separate models, the GFlowNet and the state flow (flow matching) model, which may prove somewhat computationally expensive. - The approach uses synthons rather than template-based molecular synthesis, which aids in the simplicity of combining building blocks. However, they restrict their synthesis process to a brick-and-linker formulation with up to 3 blocks, which limits the explored chemical space to linear molecules. Other Comments Or Suggestions: N/A Questions For Authors: - This reviewer is curious about the exploration space of the enamine synthons and requests that the authors provide an estimation of the state space size similar to that of Fig. 2 of SynFlowNet or Fig. 2 of RGFN. Additionally, the reviewer would like to see the number of unique building blocks explored for experiments, similar to Fig. 7 of SynFlowNet. - This reviewer would like more clarification as to the training details of the state flow model. In the appendix, there is mention of "decomposing CrossDocked molecules" to train the state flow model. However, it is unclear whether the training dataset for the state flow model contains "partial" CrossDocked molecules, which would aid in learning to dock the initial, individual fragments selected by the GFlowNet. 
This reviewer is also curious as to the success rate in decomposing molecules from CrossDocked into Enamine synthons, and what proportion of molecules succeed or fail to be decomposed. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer for volunteering their valuable time and providing insightful feedback on our paper. We address their questions one by one in our response below.

> W1. Evaluation of the diversity and chemical properties.

To further address the reviewer's suggestion, we now compare sampling efficiency, diversity, and other chemical properties of our method (CGFlow) against the 2D baseline (RxnFlow) on the first 5 LIT-PCBA targets. We define high-scoring modes as those with QED > 0.5, Vina < -10 kcal/mol for all pockets except FEN1, for which we use -7 kcal/mol, and mode similarity < 0.5. The table below shows the number of unique high-scoring modes identified after sampling 10k molecules:

| | ADRB2 | ALDH1 | ESR_ago | ESR_antago | FEN1 | Avg |
| - | - | - | - | - | - | - |
| RxnFlow | 69 | 97 | 38 | 28 | 116 | 69.6 |
| CGFlow | 276 | 417 | 358 | 213 | 358 | 324.4 |

CGFlow consistently outperforms RxnFlow in sampling efficiency, discovering 4.7x more diverse modes. Since mode diversity increases experimental success, this improvement highlights the practical advantage of our approach. We further report the full sampling trend for ADRB2 and the average properties of the top 100 diverse modes. Diversity here is computed without similarity-based filtering to avoid artificial inflation. Our results confirm CGFlow's superior efficiency in discovering diverse modes with good QED and Vina scores.

| \# of mol explored | 1000 | 10000 | 64000 | Vina | QED | MW | HAC | LogP | Diversity |
|-|-|-|-|-|-|-|-|-|-|
| RxnFlow | 2 | 69 | 1448 | -11.57 | 0.67 | 388.5 | 28.2 | 4.25 | 0.88 |
| CGFlow | 20 | 276 | 4323 | -12.34 | 0.69 | 386.1 | 27.9 | 4.38 | 0.85 |

We thank the reviewer for their comment, which motivated us to further highlight the strengths of our approach.

> W2. Concern about computational cost.

We kindly refer the reviewer to our response to Reviewer Z8PT (Weakness 2) for further clarification on computational cost.

> W3. Concern about limited search space due to the brick-and-linker formulation. 
We appreciate the reviewer highlighting this important limitation regarding our synthesis approach. Indeed, as the reviewer pointed out, our current brick-and-linker formulation restricts the explored chemical space to linear molecules by excluding nonlinear reactions, such as ring formation. However, this constraint can be substantially mitigated by expanding the chemical search space through incorporating a larger building block library. For example, V-SYNTHES [1], a pioneering work in exploring Enamine REAL Space using a brick-and-linker strategy, successfully achieved a notable hit rate of 33%. > Q1. Question about the size of the state space and unique building blocks explored during training. We estimate the sample space according to the number of synthetic steps: $10^{11}$ molecules with a single reaction step, $10^{17}$ molecules with two reaction steps, and $10^{23}$ molecules with three reaction steps. In our experiments, we employed up to two reaction steps according to Enamine REAL, and the state space size is similar to RGFN (up to 4 steps with 8,350 blocks) and SynFlowNet (up to 3 steps with 200k blocks). Additionally, we analyzed the number of unique building blocks (BBs) explored during training across the first 5 LIT-PCBA targets. Our model explored an average of ~55,000 unique BBs within 1,000 training iterations with a batch size of 64. This demonstrates a significantly broader exploration compared to SynFlowNet, which reported exploring ~15,000 unique BBs during 8,000 training iterations with a batch size of 8. |Target|ADRB2|ALDH1|ESR_ago|ESR_antago|FEN1| |-|-|-|-|-|-| |Number of Unique blocks|$45520\pm7876$|$48644\pm1983$|$55211\pm5611$|$58097\pm8529$|$69400\pm5259$| > Q2. Clarification of the training details of the state flow model. Yes, you are absolutely correct that during training, the state flow model encounters "partial" CrossDocked molecules.
Specifically, we decompose each molecule into up to three fragments using 38 bimolecular Enamine synthesis protocols defined by reaction SMARTS. We then randomly sample a fragment ordering, matching the fragment introduction schedule used in the Compositional Flow model (e.g., fragment A at $t=0$, B at $t=0.3$, etc.). A random time step is sampled so that at earlier steps, the model sees "partial" structures. This design enables the state flow model to learn realistic fragment docking conformations aligned with the fragment selection process of compositional flow. Importantly, the decomposition is not intended to recover purchasable Enamine synthons but to expose the model to chemically meaningful substructures for learning protein-pocket conformations. While guided by Enamine protocols, exact synthon matching is unnecessary, allowing us to use any molecules for pose prediction training. We will clarify these points in the revised appendix. Reference: 1. Sadybekov, Arman A., et al. "Synthon-based ligand discovery in virtual libraries of over 11 billion compounds." Nature 601.7893 (2022): 452-459.
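For concreteness, the partial-structure sampling described above can be sketched as follows (an illustrative reconstruction in Python, not our actual implementation; the function name and the example schedule are ours):

```python
import random

def sample_partial_state(fragments, schedule=(0.0, 0.3, 0.6), seed=0):
    """Sample a training example for the state flow model: shuffle the
    fragment order, assign introduction times from `schedule`, and draw a
    random time step t; only fragments already introduced at t are visible."""
    rng = random.Random(seed)
    order = list(fragments)
    rng.shuffle(order)                 # random fragment ordering
    t = rng.random()                   # random time step in [0, 1)
    visible = [f for f, t0 in zip(order, schedule) if t0 <= t]
    return t, visible
```

At early time steps only a prefix of the (shuffled) fragments is visible, which is what exposes the model to "partial" structures during training.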
Summary: The paper introduces Compositional Generative Flows (CGFlow), a novel framework designed for the generation of compositional objects with continuous features in generative applications, such as synthesis-based 3D molecular design. CGFlow extends flow matching by enabling the generation of objects in compositional steps while modeling continuous states. This is accomplished through a straightforward expansion of the flow matching interpolation process to model compositional state transitions. Additionally, CGFlow builds upon the theoretical foundations of generative flow networks (GFlowNets), allowing for reward-guided sampling of compositional structures. The framework is applied to synthesizable drug design by simultaneously designing both the molecule's synthetic pathway and its 3D binding pose. CGFlow achieves state-of-the-art binding affinity compared to synthesis-based baselines, demonstrated across all 15 targets of the LIT-PCBA benchmark. Further evaluation with PoseCheck indicates that molecules designed using CGFlow exhibit a higher number of key protein-ligand interactions, underscoring the benefits of co-designing 3D molecular structures alongside their synthetic pathways. ## update after rebuttal I keep my original rating. The authors have provided more evaluations in their rebuttal. My main concern is that the improvement of the proposed method over RxnFlow is too marginal while it introduces much more latency. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I have checked all theoretical claims and found that they are correct. Experimental Designs Or Analyses: Yes. I also recommend to conduct experiments on CrossDocked2020 or BindingMOAD. Supplementary Material: Yes. I checked the appendix and the codes. Relation To Broader Scientific Literature: The key contributions of the paper mainly relate to RxnFlow and Diffusion forcing. The work combines the ideas from both. 
The related works have been discussed in the paper. Additionally, I recommend that the authors cite and compare with other references that somewhat resemble this work [1]. [1] Ghorbani, Mahdi, et al. "Autoregressive fragment-based diffusion for pocket-aware ligand design." arXiv preprint arXiv:2401.05370 (2023). Essential References Not Discussed: See above. For a more comprehensive evaluation, the authors could consider comparing their CGFlow framework against structure-based drug design (SBDD) baselines, including references [1,2,3] among others. This would provide a more thorough assessment of CGFlow's performance and robustness in the context of established SBDD methodologies, enabling a clearer understanding of its strengths and potential areas of improvement. [1] Zhang, Z. and Liu, Q., 2023, July. Learning subpocket prototypes for generalizable structure-based drug design. In International Conference on Machine Learning (pp. 41382-41398). PMLR. [2] Zhou, X., Cheng, X., Yang, Y., Bao, Y., Wang, L. and Gu, Q., 2024. Decompopt: Controllable and decomposed diffusion models for structure-based molecular optimization. arXiv preprint arXiv:2403.13829. [3] Qu, Y., Qiu, K., Song, Y., Gong, J., Han, J., Zheng, M., Zhou, H. and Ma, W.Y., 2024. Molcraft: Structure-based drug design in continuous parameter space. arXiv preprint arXiv:2404.12141. Other Strengths And Weaknesses: Strengths: 1. This work introduces 3D information of ligands into RxnFlow-like frameworks, which enables modeling protein-ligand interactions. Weaknesses: 1. Lack of ablation studies: since the time scheduler plays an important role in the proposed compositional flow matching, related ablation studies are required. Currently, none are provided. What if the denoising process of coordinates has no overlap across different synthons? 2.
Lack of comprehensive evaluation: test on more datasets, such as CrossDocked2020 and BindingMOAD; measure training efficiency; evaluate geometrical properties, since this paper concerns 3D molecule generation. 3. Lack of baselines: SBDD baselines. 4. The performance is not good: the Vina improvement over RxnFlow is marginal, while the degradation in Success Rate (synthesizability) and in the number of synthesis steps is obvious. It is well known that ligands with larger molecular weights tend to have better Vina scores, so one may suspect that the slight improvement in Vina score comes from more synthesis steps. Other Comments Or Suggestions: N/A Questions For Authors: I noticed that the evaluation reports local-optimized and redocked poses. Have you evaluated the Vina score directly on the generated poses? Have you checked the RMSD between the generated poses and the redocked poses? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We highly appreciate this reviewer’s constructive feedback and insightful suggestions. We would like to clarify and address all of these points to the best of our ability in the response below. > W1: Lack of ablation studies about time scheduling of state flow model Thank you for the valuable suggestion. We conducted an ablation study to assess the effect of time scheduling in state flow training, comparing three settings: partial (partial overlap of synthon denoising), no overlap (strictly autoregressive), and till end (all synthons denoised until $t=1$). We compared the average local-optimized Vina docking scores across different training iterations for the ALDH1 target below: | \# of mol explored | 10,000 | 20,000 | 30,000 | |-|-|-|-| | no overlap | $-5.68 \pm 0.29$ | $-6.33 \pm 0.26$ | $-7.02 \pm 0.34$ | | partial | $-6.28 \pm 0.22$ | $-7.28 \pm 0.21$ | $-7.22 \pm 0.12$ | | till end | $-7.15 \pm 0.40$ | $-7.60 \pm 0.29$ |$ -7.79 \pm 0.12$ | CGFlow’s overlapping noise scheduling, where positions are refined as synthons are added, clearly outperforms conventional autoregressive approaches (no overlap). > W2 & W3: Lack of comprehensive evaluation (e.g., CrossDocked2020) and SBDD baselines. Following your suggestions, we evaluated CGFlow on CrossDocked2020 against established SBDD baselines. Using the same conditional objective and proxy setup as TacoGFN and RxnFlow, we generated 100 molecules per pocket in a zero-shot manner without an additional optimizing process for test targets. We varied the reward exponentiation parameter β (Low: U(1,64), Medium: U(32,64), High: U(48,64)) to balance exploitation and exploration for sampling. | |Validity(↑)|Vina(↓)|QED(↑)|AiZyn. 
Succ Rate(↑)|Div(↑)|Time(↓)| |---|---|---|---|---|---|---| |Reference| - | -7.71|0.48|36.1| - | - | |FLAG|99.7|-7.07|0.49|21.9|0.82|1047| |DecompDiff|66.0|-8.35|0.37|0.9|0.84|6189| |MolCRAFT|96.7|-8.05|0.50|16.5|0.84|141| |MolCRAFT-large|70.8|-9.25|0.45|3.9|0.82 | >141| |TacoGFN|100.0|-8.24|0.67|1.3|0.67|4| |RxnFlow|100.0|-8.85|0.67|34.8|0.81|4| |CGFlow (low β)|100.0|-9.00|0.72|55.0|0.79|24| |CGFlow (med β)|100.0|-9.16|0.73|56.6|0.76|24| |CGFlow (high β)|100.0|-9.38|0.74|62.2|0.66|24| CGFlow reduces Vina from -8.85 (RxnFlow) to -9.38 (CGFlow-high beta), outperforming all baselines. It also yields the highest QED scores (0.72–0.74) and highest AiZynthFinder success rate (62.2%) compared to all baselines, underscoring the practical benefits of synthesis-aware generation. CGFlow shows consistent synthesis success rate across both CrossDock (55.0%–62.2%) and LIT-PCBA (53.1%) benchmarks. > W4. Concerns about reward hacking by generating larger molecules. To address this concern, we conducted additional experiments on the first five targets, restricting heavy atom count (HAC) to 40. CGFlow still outperforms RxnFlow in Vina score (-10.94 vs -10.46) with comparable HAC (29.63 vs 29.37). Moreover, CGFlow achieves the highest ligand efficiency (0.375) - computed by Vina / HAC, confirming that our binding affinity gains stem from the 3D co-design strategy rather than molecule size. | |Vina (↓)|Ligand Efficiency (↑)|Avg Heavy atom count| |-|-|-|-| |SynFlownet|-8.644|0.335|26.44| |RGFN|-9.085|0.329|28.02| |RxnFlow|-10.457|0.362|29.37| |CGFlow (rebuttal)|-10.940|0.375|29.63| > W2 / W4: Measurement of training efficiency / The vina improvement over RxnFlow is marginal We kindly refer the reviewer to our response to Reviewer ScwC (Weakness 1) for experimental results on training efficiency - where we show CGFlow discovers 4.7× more diverse modes than RxnFlow. We note that optimization of docking scores is restricted by the saturation of the pocket's binding interactions. 
At that point, discovering more diverse binding modes becomes more important to maximize the success rate of practical applications. > W4. Degradation in Success Rate (synthesizability) and synthesis steps. The small drop in AiZynthFinder synthesizability arises from our transition from reaction-based generation (RxnFlow) to a synthon-based (brick-and-linker) approach. Reaction-based generation often halts prematurely if a state molecule lacks any reactive functional groups, while the synthon-based method can easily construct molecules with longer synthetic trajectories. We emphasize that our approach uses the building blocks and synthesis reactions from Enamine REAL and xREAL, known for a wet-lab synthetic success rate of 80%. > W2/Q1. Lack of evaluation of geometrical properties and questions regarding generated poses. We evaluated various geometrical properties and Vina scores of the generated poses of the top 100 molecules from local-optimized Vina optimization across 3 seeds. |Metric|Validity|Med. Energy|Med. Strain Energy|Score|Minimize|Dock|Redock RMSD<1Å|Redock RMSD<2Å| |-|-|-|-|-|-|-|-|-| |Value|$1.0\pm0.0$|$226.89\pm18.83$|$147.82\pm17.95$|$-8.749\pm0.438$|$-12.240\pm0.127$|$-12.915\pm0.104$|$13.3 \pm2.1$%|$57.3\pm10.6$%|
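For concreteness, the three time-scheduling settings compared in the ablation above can be written down as interval assignments (an illustrative sketch; the `window` parameter and the exact "partial" rule are simplifications, not our exact schedule):

```python
def denoising_intervals(t_intro, mode, window=0.5):
    """Return the (start, end) denoising interval for each synthon,
    given its introduction time t_intro[i] on the flow time axis [0, 1]."""
    n = len(t_intro)
    if mode == "no_overlap":
        # strictly autoregressive: each synthon finishes before the next starts
        return [(t_intro[i], t_intro[i + 1] if i + 1 < n else 1.0) for i in range(n)]
    if mode == "till_end":
        # every synthon keeps being refined until t = 1
        return [(t, 1.0) for t in t_intro]
    if mode == "partial":
        # fixed-length window that may overlap with later synthons
        return [(t, min(1.0, t + window)) for t in t_intro]
    raise ValueError(f"unknown mode: {mode}")
```

Under "no_overlap" the intervals tile [0, 1] disjointly, while "partial" and "till_end" let earlier synthons continue to be refined after later ones are introduced.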
Summary: This paper introduces Compositional Generative Flows (CGFlow), a framework that extends flow matching to generate objects with compositional structures and continuous features simultaneously. CGFlow combines two interleaved processes: Compositional Flow for modeling the probability path of compositional structures and State Flow for managing continuous states associated with these structures. The authors apply CGFlow to drug design through 3DSynthFlow, which jointly designs molecules' 3D binding poses and their synthetic pathways. By co-designing the 3D molecular structure and synthesis pathway, 3DSynthFlow achieves state-of-the-art binding affinity across all 15 targets in the LIT-PCBA benchmark compared to synthesis-based baselines. Evaluation using PoseCheck shows that molecules designed by 3DSynthFlow exhibit more protein-ligand interactions, demonstrating the value of 3D structure and synthesis pathway co-design. This approach addresses key challenges in drug discovery by ensuring both strong binding affinity and synthesizability. Claims And Evidence: The claims in the paper are generally well supported by clear and convincing evidence. The evidence is presented in a transparent manner with multiple evaluation metrics, statistical significance indicated by standard deviations across multiple runs, and comprehensive comparisons against relevant baseline methods. Methods And Evaluation Criteria: The CGFlow framework logically extends flow matching to handle compositional objects with continuous features, which is precisely what's needed for molecular design where both structure and 3D conformation matter. The theoretical foundation connecting compositional flow with GFlowNets for discrete structure generation and state flow for continuous features is sound. For evaluation, their choice of the LIT-PCBA benchmark is appropriate as it's a standard dataset for structure-based drug design.
The authors also conduct ablation studies on reward computation approaches and compare against multiple baselines including both fragment-based and reaction-based methods. Their experimental setup effectively demonstrates the value of jointly modeling 3D structure and synthesis pathways, which directly addresses limitations in prior work that focused on only one aspect or the other. However, the authors claim their approach handles "compositional objects with continuous features" broadly, but their evaluation is restricted only to molecular design. This raises questions about the true generalizability of CGFlow to other domains. Theoretical Claims: The main theoretical component appears in Appendix B (pages 14-15) where the authors develop the trajectory balance objective for their compositional flow model. The proof relies heavily on determinism introduced by fixing the random seed, which feels like a theoretical workaround rather than a principled approach. Please correct me if I'm wrong. Experimental Designs Or Analyses: The use of the LIT-PCBA benchmark with 15 diverse protein targets is appropriate and comprehensive. The authors compare against a broad range of relevant baselines including fragment-based methods and reaction-based methods. This is thorough and appropriate. However, the paper lacks details about computational requirements and training time comparisons, which is important for assessing practical applicability. Supplementary Material: Appendix B+C. Skimmed through the rest. Relation To Broader Scientific Literature: - The authors build upon prior flow matching work by Lipman et al. (2023) and extend it to handle compositional structures - The paper incorporates the GFlowNet framework introduced by Bengio et al. (2021) for exploring discrete compositional spaces. - The authors connect their work to recent advances in sequential diffusion models. 
- The authors address the synthesizability challenge highlighted by Gao & Coley (2020) Essential References Not Discussed: not to my knowledge. Other Strengths And Weaknesses: - Strengths: * The work addresses a significant real-world challenge in drug discovery by simultaneously optimizing for binding affinity and synthesizability. The practical utility is clear and valuable. * The figures (especially Figure 1) effectively illustrate the concept of interleaving compositional structure and continuous state generation, helping readers understand a complex methodology. - Weaknesses (just because I have to): * The paper would be stronger with more analysis of cases where the method performs poorly or limitations in certain chemical spaces. * The method introduces additional complexity compared to existing approaches. A more explicit discussion of the implementation challenges and computational overhead would provide a more balanced assessment of practical applicability. Overall, the paper presents a significant contribution by addressing the important challenge of jointly modeling 3D structure and synthesizability in molecular design, with a methodologically sound approach that could inspire future work in compositional generative modeling. Other Comments Or Suggestions: NA Questions For Authors: Could you explain the rationale behind the footnote of page 14? Code Of Conduct: Affirmed. Overall Recommendation: 5
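For reference, the trajectory balance objective referred to in the theoretical-claims discussion above is, in standard GFlowNet notation (Malkin et al., 2022; generic form, not the paper's compositional variant), for a complete trajectory $\tau = (s_0 \to \dots \to s_n = x)$:

```latex
\mathcal{L}_{\mathrm{TB}}(\tau)
  = \left( \log \frac{Z_\theta \prod_{t=0}^{n-1} P_F(s_{t+1} \mid s_t; \theta)}
                     {R(x) \prod_{t=0}^{n-1} P_B(s_t \mid s_{t+1})} \right)^{2}
```

This generic form assumes stochastic transitions over discrete states; the fixed-random-seed determinism discussed in the review and rebuttal is what allows it to be applied in the setting with continuous state components.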
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thoughtful and detailed evaluation, and for recognizing the significance of our framework contribution and its application to 3D molecule and synthesis pathway co-design. > W1. More analysis of cases where the method performs poorly or limitations in certain chemical spaces. Our synthesis-based action space built on brick & linker synthons does not yet explore certain synthetic pathways, such as ring-forming reactions and nonlinear synthetic pathways, as pointed out by Reviewer ScwC. Our method predicts the 3D binding poses of intermediate states. This synthon-based generation is chosen to prevent atoms of intermediate states from being lost in reactions or forming new rings, which would degrade the accuracy of pose prediction for intermediate states. Moreover, our pose prediction module is trained on the CrossDocked2020 dataset, in which binding pockets are extracted with a distance cutoff based on the reference ligand structure, resulting in a pocket that is inherently biased toward its reference ligand. This can negatively impact pose prediction accuracy if generated molecules optimally bind to a different subpocket from the reference ligand or are larger than the reference ligand. This issue arises during reinforcement learning-driven exploration of diverse chemical spaces. Addressing this limitation would require employing an unbiased pocket structure (e.g., using a center-based cutoff) or a full-protein structure, and we highlight this as a consideration for real-world application of our method. Finally, our training of poses is currently biased towards Vina docking poses from the CrossDocked2020 dataset. A promising direction to remove this bias is training the model on an experimental structure dataset. > W2. A more explicit discussion of the implementation challenges and computational overhead. Thank you for this suggestion! We will incorporate the following details in our final manuscript.
The state-flow model (i.e., the pocket-conditional pose predictor) was trained for 100 epochs using a batch size of 32 on 4 L40 GPUs (48GB), taking a total of 18.4 hours. Importantly, as the state-flow model is trained on the CrossDocked dataset and can be reused across different test pockets, it incurs only a one-time computational cost. We also plan to release the model weights, so users will only need to train the composition-flow model tailored to their custom reward function and target. The composition-flow model is trained individually for each pocket for 1,000 steps with batch size of 64 and 80 flow matching steps. The training with GPU-accelerated docking takes 12-20 hours (depending on targets) on a single A4000 GPU (16GB). We find this computational requirement accessible for most practical drug discovery campaigns. Furthermore, the composition-flow model can also be trained in a pocket-conditioned manner (Please see SBDD benchmark in our response to Reviewer eDTU) to sample molecules for any target pocket in a zero-shot manner, making model training a one-time cost in this setting. |Flow matching steps|Avg Vina (↓)|Top 100 Vina (↓)|Training time (sec/iter)|Sampling time (sec/mol)| |-|-|-|-|-| |10|$-10.28\pm0.32$|$-14.27\pm0.59$|33|0.053| |20|$-10.24\pm0.18$|$-14.38\pm0.22$|34|0.080| |40|$-10.40\pm0.13$|$-14.51\pm0.27$|39|0.123| |60|$-10.50\pm0.14$|$-14.57\pm0.24$|44|0.160| |80|$-10.44\pm0.18$|$-14.53\pm0.16$|49|0.199| The sampling time of our model is 0.05~0.20 seconds/molecule depending on the choice of number of flow matching steps. We further analyzed how the number of flow matching steps impacts performance using the ALDH1 target with the Vina reward. Performance slightly improves with increased flow matching steps and saturates around 40-60 steps. 
We attribute this marginal improvement to the fact that the pose prediction module’s primary role is providing a spatial context between intermediate molecules and the pocket; thus, extremely precise pose predictions have limited additional impact on model decisions. > Theoretical Claim & Question We thank the reviewer for this insightful observation. Indeed, our current theoretical analysis leverages determinism through fixing the random seed primarily for analytical clarity. However, this leads to deterministic noise sampling for the initial synthon states; for example, when `C(=O)[*]` and `[*]NC` are sequentially added, their initial coordinates are identical. To maintain determinism while preventing synthons from receiving the same initial states, we can use the molecule size or a hash of the molecule to set distinct random seeds across the generation process. We acknowledge this limitation explicitly and highlight extending our theoretical framework to handle fully stochastic environments, using Expected Flow Networks [1] as a promising direction for future research. Reference: 1. Jiralerspong, Marco, et al. "Expected flow networks in stochastic environments and two-player zero-sum games." ICLR 2024.
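The seeding fix sketched in our answer above could look as follows (an illustrative Python sketch, not part of the current implementation; function names are ours):

```python
import hashlib
import random

def synthon_seed(partial_smiles: str, step: int) -> int:
    """Derive a deterministic seed from a hash of the partial molecule and
    the generation step, so identical synthons added at different points
    get distinct initial states while the environment stays deterministic."""
    digest = hashlib.sha256(f"{partial_smiles}|{step}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def initial_coords(partial_smiles: str, step: int, n_atoms: int):
    """Sample reproducible Gaussian initial coordinates for a new synthon."""
    rng = random.Random(synthon_seed(partial_smiles, step))
    return [(rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n_atoms)]
```

With this scheme, adding `C(=O)[*]` at step 0 and `[*]NC` at step 1 would draw from different seeds, while rerunning the same trajectory reproduces identical coordinates.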
Linearization Turns Neural Operators into Function-Valued Gaussian Processes
Accept (spotlight poster)
Summary: The authors propose a new approach to approximate stochastic neural networks with Gaussian weights by Gaussian processes (GPs). The approach is based on performing a linearization around the mean of the weights to obtain a GP approximation of the network. The effectiveness of the framework is shown in examples where neural networks are used to learn PDEs. Claims And Evidence: I found the claims well supported by mathematical evidence. Methods And Evaluation Criteria: - An aspect that could be improved is the empirical evaluation. Since a linearization is, by definition, a local approximation, it would be important to have experiments with different weight distributions (with different variances) to see how the accuracy of the approximation changes as the uncertainty of the distribution increases. Another interesting experiment would be to compare the approximation obtained with the proposed method with that of the limiting GP obtained by relying on the central limit theorem [Lee, Jaehoon, et al. "Deep Neural Networks as Gaussian Processes." International Conference on Learning Representations. 2018.], which is often used to study neural network posteriors [Cardelli, Luca, et al. "Robustness guarantees for Bayesian inference with Gaussian processes." Proceedings of the AAAI conference on artificial intelligence. Vol. 33. No. 01. 2019]. - Minor point: in the future, it would also be interesting to consider non-diagonal distributions on the weights. Theoretical Claims: Theoretical claims appear sound to me. Experimental Designs Or Analyses: Experimental analysis is sound. However, as mentioned above, it would be good to extend it a bit more. Supplementary Material: I mostly checked the experimental details. Relation To Broader Scientific Literature: The paper extends recent works that approximate a neural network with a Gaussian process. It does so by defining a Gaussian distribution on a functional space.
This allows for the evaluation of uncertainty not only at a finite set of input points, such as in [Adams, Steven, et al. "Finite Neural Networks as Mixtures of Gaussian Processes: From Provable Error Bounds to Prior Selection." arXiv preprint arXiv:2407.18707 (2024).], and not only in the limit of infinite width, such as for the papers mentioned above. Of course, the approximation comes with no error bound; deriving such bounds would, I believe, be a very interesting and promising direction. Essential References Not Discussed: The references I mentioned above are not referred to in the paper, and it would be good to include those. Furthermore, a reference that should be discussed in the main text, and not only in the Supplementary, is [Khan, Mohammad Emtiyaz E., et al. "Approximate inference turns deep networks into Gaussian processes." Advances in Neural Information Processing Systems 32 (2019)], where the authors also seem to rely on similar techniques. Other Strengths And Weaknesses: I believe that the key strength of the paper is to offer a new way to study the uncertainty of neural networks. Of course, the method comes with the weaknesses described above, but overall, I found the paper to be an interesting contribution. Other Comments Or Suggestions: No other comments. Questions For Authors: My questions have already been formulated above; in summary, the most important one, apart from improving the related work and the discussion of existing methods, is: - Can you please extend the experiments as suggested above? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and for highlighting several interesting directions for further exploration. Below are our responses to your main points: --- **Comparison with infinite-width GP approaches:** We agree that investigating how infinite-width GP methods could be adapted to the infinite-dimensional operator learning setting is an intriguing theoretical direction. However, our primary goal is to develop a practical, yet theoretically sound, post-hoc method that could be applied to practical (i.e., non-asymptotic) architectures. To our knowledge, infinite-limit GP formulations typically involve retraining networks as the hidden dimension grows, which seems somewhat orthogonal to our focus on uncertainty quantification that leaves the architecture untouched. If there is a specific experiment you have in mind, we would be happy to discuss it further during the discussion period. --- **Varying weight distributions:** We appreciate the suggestion to explore different variances. In practice, the choice of weight-space uncertainty might often depend on the downstream application an end-user has in mind. For instance, with a Laplace approximation and our low-rank-plus-diagonal covariance structure, the approximation defaults to the prior variance far away from the training data. The diagonal portion of our covariance is an isotropic Gaussian prior, so larger variances tend to dilute the impact of the low-rank factors (which encode data-informed structure). We tune the prior variance with respect to a validation set, but we agree that experimenting with more informative priors (e.g. non-isotropic or even non-diagonal) could be a valuable direction for future research, also in general for Bayesian deep learning. 
--- **Non-diagonal weight distributions:** As mentioned above, we use a low-rank-plus-diagonal structure (which leads to highly non-diagonal weight covariances in practice), but we acknowledge that a more general covariance could capture richer parameter correlations. In our experiments, increasing the rank of the covariance did not yield substantial improvements, although we suspect that combining a higher-rank structure with more informative priors may prove beneficial, both of which are supported by our framework. --- **Relevant references and related work:** Thank you for pointing out additional references. We agree that Khan et al. (2019) is an important relevant prior work on Laplace approximations. We will move it to the main text in the revised version and also add Adams et al. (2024) as an interesting tangential work (where a GMM is constructed layerwise using their activations) in the related work section. --- **Lack of error bounds and robustness guarantees for Bayesian inference:** We agree that deriving formal error bounds is a challenging yet important open problem in the entire field of Bayesian deep learning, especially valuable for applications such as operator learning. To our knowledge, the field does not currently offer rigorous, general bounds for weight-space posteriors, which seems to be an important prerequisite for establishing corresponding bounds for our setting. In the revised version, we will consider acknowledging this limitation and highlighting the need for theoretical guarantees on the quality of uncertainty estimates. While the robustness guarantees in Cardelli et al. (2019) are indeed an interesting starting point, it is unclear how to verify the assumptions of their work for the neural tangent kernels used in this work. Specifically, the computation of the supremum in point 3 of the "Constant Computation" section seems intractable or at least highly nontrivial in our case.
Moreover, the paper seems to assume that the input space is (a subset of) $\mathbb{R}^d$ whereas, in our setting, the input space is an infinite-dimensional function space. Nonetheless, we believe that the framework of function-valued Gaussian processes provides a uniquely viable starting point for evaluating and interpreting predictive uncertainties in Bayesian deep learning. --- We hope these clarifications address your questions. We look forward to continuing the discussion and potentially design an additional experiment with your support that can further refine our work. --- Rebuttal Comment 1.1: Comment: I thank the authors for their replies. I am satisfied with their replies. Consequently, I confirm my positive score.
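For concreteness, the construction discussed in this thread is the standard linearized pushforward (generic notation, not the paper's; $a$ denotes an input, $w$ the weights, $\mu$ the posterior mean):

```latex
f(a; w) \approx f(a; \mu) + J_\mu(a)\,(w - \mu),
\qquad J_\mu(a) := \partial_w f(a; w)\big|_{w=\mu},
\\[4pt]
w \sim \mathcal{N}(\mu, \Sigma)
\;\Longrightarrow\;
f \sim \mathcal{GP}\!\big(f(\,\cdot\,;\mu),\; J_\mu(\,\cdot\,)\,\Sigma\,J_\mu(\,\cdot\,)^\top\big),
\\[4pt]
\Sigma = \sigma^2 I + L L^\top
\;\Longrightarrow\;
k(a, a') = \sigma^2 J_\mu(a) J_\mu(a')^\top + \big(J_\mu(a) L\big)\big(J_\mu(a') L\big)^\top .
```

The last line shows why, as noted in the rebuttal, a large prior variance $\sigma^2$ dilutes the data-informed low-rank term $L L^\top$ in the predictive covariance.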
Summary: The paper introduces LUNO, a framework for uncertainty quantification in neural operators using function-valued Gaussian processes. By leveraging linearization, the method propagates Gaussian weight-space uncertainty to the operator’s predictions, effectively converting trained neural operators into Gaussian random processes. LUNO provides a _post-hoc_, scalable, and resolution-agnostic Bayesian uncertainty estimation approach, evaluated primarily on Fourier neural operators (FNOs) for solving PDEs. _(for any missing input on any of the fields, please refer to the **Strengths and Weaknesses** or the **Other comments or suggestions** sections)_ Claims And Evidence: - Neural operators lack inherent uncertainty quantification, which limits their reliability in high-stakes applications. - *Evidence:* Prior work has focused on deterministic operator learning without probabilistic guarantees. - Model linearization enables efficient uncertainty propagation without retraining. - *Evidence:* Theoretical derivations show that linearization allows weight-space uncertainty to be pushed forward to the output function space. - The resulting function-valued Gaussian process belief provides structured uncertainty estimates. - *Evidence:* The paper introduces a rigorous connection between function-valued Gaussian processes and Bayesian deep learning techniques. Methods And Evaluation Criteria: The framework linearizes the trained neural operator around the mean of a Gaussian weight belief. This is interpreted as a probabilistic generalization of currying, leading to a function-valued Gaussian process. The authors evaluate LUNO on Fourier neural operators (FNOs) applied to PDE problems with multiple metrics, including RMSE, NLL and $\chi^2$ statistics. Theoretical Claims: Claims for the submission: - LUNO constructs function-valued Gaussian processes from neural operators by treating them as infinite-dimensional stochastic processes. 
- Probabilistic currying is introduced as a key concept, establishing equivalence with multi-output Gaussian processes. - Gaussian weight-space uncertainty can be efficiently propagated via linearization, providing a computationally tractable Bayesian formulation. Experimental Designs Or Analyses: - Evaluates low-data regimes (small training sets) and out-of-distribution (OOD) generalization. - Compares LUNO to: - Sample-based approaches (Monte Carlo sampling of weight posteriors). - Input perturbations (simulating uncertainty via randomized inputs). - Deep ensembles (training multiple independent models). - Experiments on Burgers' equation and Advection-Diffusion PDEs, analyzing predictive uncertainty under domain shifts. Supplementary Material: In the supplementary material, the authors provide extensive theoretical derivations in the appendix, including proofs for probabilistic currying and function-space Gaussian processes. Moreover, they include additional empirical results for different PDE datasets, as well as details on hyperparameter selection, training procedures, and computational complexity. Relation To Broader Scientific Literature: The paper builds on existing work in neural operators and Fourier neural operators (FNOs), connecting with research on Bayesian deep learning techniques such as Laplace approximation, variational inference, and SWAG. It relates to operator learning in function spaces, particularly focusing on function-valued Gaussian processes. Essential References Not Discussed: Although not exactly my field of expertise, the authors seem to cover extensively the literature relevant for the paper. As a mere suggestion, maybe it is worth discussing Wasserstein Gaussian processes as an alternative probabilistic operator learning approach, as well as maybe other works related to implicit stochastic processes. 
Moreover, related to the post-hoc approaches, a comment on the effects of the linearization approximation on the uncertainty estimates could be insightful, especially in comparison to other methods (e.g. [1]) that do not alter the predictive mean of the original model. [1] Ortega, L. A. et al. (2024, July). Variational Linearized Laplace Approximation for Bayesian Deep Learning. In International Conference on Machine Learning (pp. 38815-38836). PMLR. Other Strengths And Weaknesses: ### Strengths: - Post-hoc application avoids retraining, making it computationally efficient. This is an especially interesting feature for large-scale applications. - The presented approach is strongly theoretically grounded, with rigorous uncertainty propagation. - The method seems to be scalable, working well with high-dimensional data and large models. - While a bit dense at times, the paper is well written and structured, making it more accessible. ### Weaknesses: - The method assumes Gaussian weight uncertainty. This hypothesis could limit the model's applicability to highly non-Gaussian posteriors. - Linearization may introduce approximation errors, particularly for highly non-linear operators. This is a key point that should be addressed further, since one of the main motivations of post-hoc approaches is to avoid retraining while keeping the original model's performance. - Evaluation focuses on PDE benchmarks; broader validation on other function learning tasks would broaden the method's applicability. Other Comments Or Suggestions: - A lot of the text in the first 3+ pages of the article is devoted to mostly explaining basic concepts and results from previous work. Given the amount of work relegated to the supplementary material, it might be beneficial to streamline these initial sections to focus more on the novel contributions and include more details in the main text. Along the same lines, certain discussions like 5.2 seem more appropriate for the supplementary material.
- The wording of Lemma 4.2 could be improved for clarity. - Since this work explores the connections between gradient descent and Bayesian inference, it may also be interesting to tackle the "main research question" more from the Bayesian perspective. In particular, it would be interesting to see if there is any complete Bayesian approach to this problem, where the complete loss function is derived from a probabilistic formulation of the problem. - The authors should extend the experimental part to apply their proposed approach to more complex datasets beyond toy regression tasks. - Explore whether normalizing flows or implicit stochastic processes (or maybe just implicit distributions) could further enhance flexibility beyond Gaussian assumptions. - Investigate how the method performs under stochastic gradient descent (SGD) instead of full-batch gradient descent. Questions For Authors: 1. How does this approach perform on classification tasks where aleatoric noise is often modeled differently? 2. Could the method be extended to convolutional or transformer-based architectures? 3. Could normalizing flows be used in this context to extend the method to more complex uncertainty representations? 4. Is there any approach which would allow for a more efficient storage of $\theta_0$ so that the shifted network method could be more practical in memory-constrained settings? Maybe some sort of approximation or compression technique? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed and positive assessment of our work! --- **On the assumption of Gaussian weight uncertainty:** This is an important point. Our theoretical framework (see Step 3 of Section 3.2, and Appendix A, particularly Corollary A.14 and Theorem 3.2, and Section A.4) indeed apply to more general (non-Gaussian) weight-space distributions. For instance, the predictive mean and covariance expressions remain valid under arbitrary (non-Gaussian) weight-space beliefs, as long as the respective moments of the weight-space belief exist. We focus on Gaussian distributions because they yield a (closed-form) Gaussian process over the output space, enabling well-studied analytic tools (e.g., conditioning, and Bayesian experimental design). However, mixture models or other non-Gaussian distributions are an exciting direction for empirical evaluation in future work. We will clarify this more explicitly in the revised text. --- **Effect of linearization:** We want to emphasize that we only linearize the neural operator in the weights and not with respect to its input. The linearized neural operator $F^\text{lin}(u, w)$ is linear in $w$, but still highly nonlinear in $u$. Furthermore, exactly as in Ortega et al. (2024), the predictions of the original model are not altered, only extended by a covariance which depends on the chosen weight-space uncertainty. --- **Applicability beyond Fourier neural operators:** While our experiments focus on Fourier Neural Operators (FNOs), the theoretical framework applies in principle to any neural operator, including transformer-based architectures. We chose FNOs because of their popularity and the particularly efficient lazy representation they enable for the function-valued posterior process. Our implementation exploits the structure of the inverse Fourier transform for computational efficiency, which should in principle extend to other signal transforms such as Spherical FNOs. 
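To make the weight-only linearization discussed above concrete: the sketch below is purely illustrative (a tiny toy model, not the authors' FNO implementation) and assumes the standard linearized-Laplace identities — the predictive mean stays at the trained-weight prediction, and uncertainty enters only through the weight-Jacobian as $J \Sigma J^\top$.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(w, u):
    # Toy stand-in for a neural operator: nonlinear in both weights w and input u.
    W1, W2 = w[:6].reshape(3, 2), w[6:].reshape(1, 3)
    return (W2 @ np.tanh(W1 @ u)).ravel()

w0 = rng.normal(size=9)       # linearization point = trained weights
Sigma = 0.01 * np.eye(9)      # Gaussian weight-space covariance (e.g. Laplace)
u = rng.normal(size=2)        # one (discretized) input function

# Jacobian of f w.r.t. the weights at w0 (central finite differences).
eps = 1e-6
J = np.stack([(f(w0 + eps * e, u) - f(w0 - eps * e, u)) / (2 * eps)
              for e in np.eye(9)], axis=1)          # shape (1, 9)

# Linearization in w only: the predictive mean equals the original prediction,
# and the model stays fully nonlinear in the input u.
mean = f(w0, u)
cov = J @ Sigma @ J.T
```

Note that `mean` is exactly `f(w0, u)`, matching the rebuttal's point that the original model's predictions are not altered, only extended by a covariance.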
--- **On classification tasks:** This is an interesting point. Neural operators can naturally be applied to pointwise classification tasks such as semantic segmentation in computer vision. It is possible to generalize our methodology to classification tasks, in which the output GP is transformed through a link function, yielding a generalized linear model. Computing the pushforward distribution of the output GP through the link function is an active field of research and introduces additional approximations. However, a central motivation for this work is the empirical and theoretical study of uncertainty quantification for deep-learning based emulators for scientific simulations, which virtually always focuses on the high-dimensional regression setting. --- **Other methods for weight space uncertainty:** Thank you for your suggestion. Our theoretical framework allows for arbitrary (non-Gaussian) weight-space beliefs, so one could in principle use a normalizing flow to represent the weight-space uncertainty. We mostly focus on the Laplace approximation in our experimental analyses because it is cheap to compute and applicable to pretrained models, which makes it particularly promising for large models like neural operators. A strength of our framework is that the choice of weight-space uncertainty structure is kept flexible. We appreciate the reviewer's suggestion and agree that exploring richer weight-space beliefs is an interesting avenue for future research. --- **Compression of the shifted network parameters:** In most practical cases, the linearization point $w_0$ (see Appendix A.4, Step 2), is chosen to be equal to the mean $\mu$ of the weight-space belief (e.g., this is set to the weights $w^\star$ of the trained network in case of a Laplace approximation). We did formulate Appendix A.4 under an arbitrary linearization point to allow for more flexibility in the theoretical framework, but did not end up using this flexibility in practice. 
--- We hope these clarifications address your questions. Thank you again for your time and constructive feedback. We also thank you for the additional comments and suggestions regarding streamlining the first three pages and clarifying a few passages. We will include these corrections and references.
Summary: This paper introduces LUNO, a linearization approach for turning a nonlinear neural operator into a Gaussian random operator, thereby providing uncertainty estimates for operator learning. This is important in areas such as safety-critical prediction and out-of-distribution scenarios. The method is compared against baselines, particularly deep ensembles, in a low-data regime and an OOD setting, showing superior and/or more principled uncertainty quantification. Claims And Evidence: The authors claim to construct a function-valued Gaussian process for uncertainty quantification, linking it to probabilistic currying; this is explained convincingly in the paper. Methods And Evaluation Criteria: The proposed methods address a low-data regime and an OOD setting. These support the authors' claim that the proposed methods contribute to uncertainty quantification in such domains. On the other hand, the authors only evaluated their framework on FNO. Since their approach depends on linearization (in terms of network weights), the accuracy of the linear approximation could depend on the network architecture. Therefore, it makes sense to include an experiment evaluating other types of neural operators as well. Theoretical Claims: I have not checked the correctness of the proofs. Experimental Designs Or Analyses: Yes, I checked everything included in the main text. See above comments about other neural operator architectures. Additionally, in line 328 left column, the authors are not clear about what ‘sample-based’ approaches are, so they should include a brief description in the main text. In the appendix, it seems like the authors sample from the distribution of weights and push it forward to the output. But this does not readily give a Gaussian process, unlike what the authors claim afterwards. This could be made clearer. Supplementary Material: I reviewed the supplementary material for the experiments, specifically Appendix D.
Relation To Broader Scientific Literature: The paper provides an additional tool in the so-far ill-equipped toolkit for uncertainty quantification for operator learning. As discussed by the authors, current methods do not extend to Fourier neural operators. The method proposed builds on existing work on estimating the posterior distribution of neural network weights. Essential References Not Discussed: I have not identified any missing references, either in Gaussian processes in a neural operator setting or in linearized Laplace approximation. Other Strengths And Weaknesses: This paper is very well written and clear, and the experiments are thoughtful. It draws a link elegantly with the concept of currying in functional programming, connecting the two fields. The linearization approach is developed well and rigorously, and although I have not checked the proofs, the claims are sensible and consistent with existing intuitions about Gaussian processes. Other Comments Or Suggestions: Line 120 left column: ‘This justifies interpreting random processes as probability measures….’, I found this line not so clear - a probability measure is a set function which maps to [0,1], which the random processes spoken of are not. Perhaps better to replace as ‘random variables’. Line 131 left column: ‘A d’-output Gaussian process is a random process ….for all n\in \mathbb N and a_1,…,a_n \in \mathbb A.’ This reads a bit weird, clearer to say ‘A d’-output Gaussian process f on \mathbb A \times \Omega’ which allows you to refer to the same f later on in the sentence. Line 134 right column: The F’s in the tuple should be bolded, it is also a bit unclear without further explanations how a function could be seen as jointly Gaussian. Worth including an explanation or point to the appendix somewhere. Line 244 left column: `f(a, \mu)(x)` is a typo, this should be `f((a,x), \mu)`.
Line 251 left column: upright f is really just `f^{lin}_\mu((a,x), \cdot)`, so it’s a redefinition to simplify notation. Making it clear would improve the reading experience. Figure 3 needs to be fixed, as there are no longer 3 rows as referred to in the caption. There are also two places in the paper which refer to panel 8, which is the null space projection in Figure 3 - this needs to be fixed too. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive evaluation of our paper and for pointing out areas where we can further clarify the presentation. --- **Applicability beyond FNO:** While our experiments focus on Fourier Neural Operators (FNOs), the theoretical framework applies in principle to any neural operator. We chose FNOs because of their popularity and the particularly efficient lazy representation they enable for the function-valued posterior process. Here, we achieved significant computational speedups by leveraging analytic properties of the inverse fast Fourier transform. --- **Clarification on sample-based approaches:** We do not claim that sample-based approaches directly yield Gaussian processes, as noted in Section 5: > We evaluate linearized predictive uncertainty (LUNO-\*) against sample-based approaches (Sample-\*), which require additional approximations to impose a Gaussian Process structure over the output space. Our linearization approach, by contrast, yields an analytic function-valued GP formulation over the output space, which we believe is a key advantage in operator learning tasks. For evaluation purposes, we consider sample-based approaches that amount to positing a moment-matched GP using the Monte Carlo estimates of the mean and covariance function in the output space. In our experience, this also makes the calibration of SAMPLE-based approaches computationally more expensive. We apologize for the vague formulation and will revise the text to make this distinction clearer. --- **Typos and notational clarifications:** We appreciate your thoroughness in pointing out typos, notational improvements, and figure label inconsistencies. This helps a lot and we will make the necessary corrections in the revised manuscript. --- Thank you again for your constructive feedback. Should anything remain unclear, we are happy to clarify during the discussion period.
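The moment-matched sample-based baseline described in this rebuttal can be sketched as follows; the tiny model, isotropic perturbation, and sample count are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(w, u):
    # Toy nonlinear model standing in for a trained neural operator.
    return np.tanh(w @ u)

mu = rng.normal(size=(3, 2))   # mean of the Gaussian weight belief
u = rng.normal(size=2)         # one (discretized) input function

# Sample-based pushforward: draw weight samples, push each through the model,
# then *impose* a GP on the outputs by matching the empirical mean/covariance.
samples = np.stack([f(mu + 0.1 * rng.normal(size=mu.shape), u)
                    for _ in range(2000)])
mm_mean = samples.mean(axis=0)            # Monte Carlo mean
mm_cov = np.cov(samples, rowvar=False)    # Monte Carlo covariance (3 x 3)
```

Unlike the linearized pushforward, this route needs many forward passes and an extra moment-matching step before any GP machinery (conditioning, calibration) can be applied, which is the distinction the rebuttal draws.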
Summary: The paper proposes a novel framework for approximate Bayesian uncertainty quantification in trained neural operators. The approach relies on model linearisation and pushes weight-space uncertainty to neural operators' predictions. This allows the application of Bayesian deep learning methods, such as linearised Laplace approximation, to neural operators. Claims And Evidence: Claims: - A novel framework (LUNO) which provides linearized predictive uncertainty in neural operators; - Interpretation of LUNO as a probabilistic generalization of the concept of *currying* in functional programming; - Compatibility of LUNO with established methods for quantifying weight-space uncertainty in deep neural networks, including the Laplace approximation. - LUNO scales to large models and datasets and, like neural operators, is inherently resolution-agnostic. Evidence: Proofs and case study on Fourier neural operators. The claims are well supported by the evidence. Methods And Evaluation Criteria: Yes, the evaluation criteria are appropriate. Theoretical Claims: I went over proofs until Corollary A.13. I did not spot any issues and could generally follow the flow of the proofs. However, since this is not my main expertise, I could have missed something. Nevertheless, the way the theoretical arguments are structured is natural and consequential. Experimental Designs Or Analyses: Yes, the experimental design is suitable and experiments provide detailed evaluation of the proposed approach. Supplementary Material: Yes. Part of the proofs and supplementary figures. Relation To Broader Scientific Literature: The paper is in the domain of neural operators and generally touches on uncertainty quantification and utilization of Gaussian processes. Essential References Not Discussed: None are specifically missed as far as I can tell. Other Strengths And Weaknesses: Strengths: - The paper is very clearly structured and motivated. - The experiments are detailed and well-described.
- The limitations are discussed and acknowledged. Weaknesses: - No code is available for reproducibility of the experiments as far as I can tell. Other Comments Or Suggestions: No other comments. Questions For Authors: Are the authors planning to release the code with the paper in the future? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your positive review of our paper! We are glad to hear that you found the theoretical and experimental aspects clear and well-structured. Should anything still remain unclear, we are happy to clarify during the discussion period. --- **Publication of code:** We intend to release the code for our experiments upon acceptance. To preserve anonymity during the double-blind review process, we have refrained from sharing it at this stage due to dependencies and references that could potentially reveal our identity. That said, we are fully committed to ensuring reproducibility and will provide the complete codebase. If access to the code would be helpful during the discussion period, we would be happy to invest additional time to clean and anonymize it appropriately.
The Missing Alignment Link of In-context Learning on Sequences
Accept (poster)
Summary: Authors study the limits of LLMs’ abilities for in-context learning, focusing on learning sequence-to-sequence alignment (in the machine translation sense of the word). Authors design synthetic experiments that probe said ability and demonstrate that several modern Llama 3 variants do indeed fail to learn alignment in-context. To combat this, authors introduce ICA-Tune, a PEFT strategy that learns to adjust the attention weights within the LLM. Authors analyze the fine-tuned models and demonstrate that ICA-Tune learns input-output alignment by forming new induction circuits in the middle layers of the model. Claims And Evidence: The main claims in the paper are, from my perspective: 1. (L95) "the inability of modern LLMs to in-context learn structured Seq2Seq tasks" - stemming from their inability to learn the task-specific alignments (in MT sense) beyond very short sequences; 2. That the proposed method, ICA-Tune, can alleviate said problem through a mixture of parameter-efficient fine-tuning and in-context learning; 3. The existence of several phenomena within LLM representations, e.g. the specific induction circuit reported in (L324). To support these claims, authors design a synthetic in-context-learning problem generator inspired by classical machine translation and demonstrate (with reasonable ablation) that the LLM in question fails to learn the alignment. They then demonstrate how this can be rectified with ICA-Tune. In my opinion, claims 2 and 3 are properly supported, but the first claim is arguably too general. **Overclaim about generality.** My main concern is that the paper oft makes general claims about “modern LLMs” (e.g. L95-97 “the inability of modern LLMs to …”) that are, in my view, not properly supported and may mislead readers. Authors only consider Llama 3 8B in the main paper and Llama3.2-1B & 3B in appendix E.1: all relatively small and from the same (albeit popular) model family.
This leaves several potential ways that the hypothesis may turn out false: **A. Emergent abilities?** It could be that modern LLMs do learn alignment, but it emerges only after a certain model size (e.g. 70B, 405B). LLMs have been known to ‘sprout’ new emerging abilities in the past [1]. From the current analysis, it is unclear if modern LLMs are fundamentally unable to learn alignment (as the authors claim) or if this is only a failing of the particular LLMs chosen for the study. **B. Model idiosyncrasy?** While less likely than the previous hypothesis, it is possible that there is something about specific Llama 3 training or inference conditions that affect their ability to learn alignment. This can be eliminated by testing on other models: latest qwen 2.5 / mistral / deepseek / qwq model variants. Unless authors can eliminate these possibilities, I would argue that the claims need to be rewritten substantially (e.g. the inability of modern LLMs -> the apparent inability of such and such LLM types under such conditions) or toned down. Note that double-checking this claim does not require significant expenditure of resources: to the best of my knowledge, one may test the state-of-the-art LLMs’ ability to learn alignment **using public APIs**, without any specialized hardware. This includes both open models on free tier endpoints (e.g. lmarena.ai, deepseek R1) and commercial models (e.g. openai API, anthropic, google, etc), since the test only requires in-context learning and inference. If authors find that a diverse set of SoTA models consistently fails to learn alignment, it would strengthen the claims significantly. Methods And Evaluation Criteria: Overall, they do indeed make sense. The paper makes a deliberate choice to evaluate on synthetic data, which determines both its strengths and weaknesses.
On the positive side, the controlled experiment allows authors to better isolate the phenomenon, vary experiment parameters and design counterfactual experiments for ablation. On the negative side, it leaves a chance that the proposed solution (ICA-Tune) may not transfer as well to real world data. Theoretical Claims: To the best of my abilities, the main claims in the paper are empirical, not theoretical. Experimental Designs Or Analyses: While the experiments are limited to synthetic problems, they are generally sound. Authors consider reasonable setups and perform additional ablations (e.g. p.4 right) to verify their observations. My main concern about the choice of LLMs and the general claims about LLM abilities is described above in the "Claims And Evidence" section. Supplementary Material: First, I would like to commend the authors for attaching the supplementary code. While I make several critical statements about the code below, overall it is great that the code is available. **Lack of README** To the best of my knowledge, there are no clear instructions for reproducing the experiments - there are several notebooks and a script.sh that seemingly performs ICA-tune, but the scripts themselves require a special arrangement of files and dependencies that are never specified. It would be best to add a README file in the final version of the supplementary materials. **Unclear library versions** The code uses several non-standard libraries (e.g. transformers, peft) that are known to break their interface between major versions. It would be best to specify the exact version of every library in requirements.txt or a similar way and, if any complex installation steps are necessary, explain them in the (currently absent) README. Relation To Broader Scientific Literature: The paper relies on popular prior work in the LLM community: the chosen model family, LoRA adapters, circuits, etc.
Essential References Not Discussed: Authors investigate a narrow (but important) capability of modern LLMs and review related works in S2, and, to the best of my knowledge, their review is sound (but not encyclopedic). Though, it is not impossible that I missed some other relevant work. Other Strengths And Weaknesses: **Strengths** A simple and practical patch for a specific problem, no overthinking. Even if it doesn’t generalize to all LLMs, it’s still useful to many. Within the one synthetic task they consider, authors go to commendable lengths to ablate their findings (S3, S4). Analysis of the ICA-Tune-d models, if mostly mechanistic, is a great addition. **Weaknesses** As I specified earlier in "Claims And Evidence", I believe the main weakness of the paper to be overclaim: authors make conclusions about "modern LLMs" in general, but only consider one model family, and only relatively smaller models. While the paper could also be strengthened by real world experiments to confirm the efficacy of ICA-Tune, it appears to be a deliberate choice and not a weakness. Other Comments Or Suggestions: **Minor suggestions** - please close opened files (e.g. dataset.py L27-31) - model loading (icatune.py L39, 44) makes it unclear which llama3 are you loading and what format must one prepare this model in - please check the code for unused dependencies (e.g. re in icatune.py) Questions For Authors: (minor, extension of OOD generalization experiments) From a practitioner's point of view, are there feasible ways to extend ICA-Tune to 'pre-patch' known LLMs ahead of time so they would be able to learn alignment in-context for a broader range of unknown ICL tasks? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the thoughtful feedback. We will address the issues with the supplementary material in the final submission. Below, we respond to the other concerns. A. We repeated the experiments of Figure 3 on multiple LLMs. We observe similar trends. We report below the numbers for m = 8, c = 1 where we compare the Standard and Pre-Aligned prompting methods. We observe that even large models exhibit poor sequence prediction accuracy. Also, interleaving the x and y tokens as a pre-aligned token sequence leads to a significant jump in accuracy, indicating that lack of alignment is a major reason for poor performance.

| Model | Standard | Pre-Aligned |
|-----------------|----------|-------------|
| Llama-3.3-70B-Instruct-Turbo | 20 | 91.25 |
| Llama-3.1-405B-Instruct-Turbo | 31.25 | 98.75 |
| gpt-4o | 36.25 | 100.0 |
| claude-3-7-sonnet-20250219 | 51.25 | 82.5 |

B. We also repeated the experiments of Figure 4 on the learning dynamics of ICATune on a Qwen2.5-3B model. The results can be found in this [link](https://drive.google.com/file/d/1R90cBSwCaBkumSwibopWckyVIEDZxpaj/view?usp=sharing). We observe the same conclusions over the three plots --- (1) Sudden jump in prediction accuracy of the y tokens at the start of a y-phrase, (2) Emergence of alignment in middle layers (23, 24) of the LLM. (3) Followed by IC-Lookup ability in higher layers (28, 29) above the layers where alignment emerges. This Qwen model is smaller than the Llama-3 model and yet we observe the same pattern. C. We also present results on real translation datasets for three language pairs. Please see the results in the response to reviewer roK1.
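The "Pre-Aligned" prompting condition compared in this rebuttal (interleaving the x and y tokens) can be sketched with a small helper; the phrase segmentation and list representation below are illustrative assumptions, not the paper's exact prompt template.

```python
def pre_aligned(x_phrases, y_phrases):
    # Interleave aligned source (x) and target (y) phrases so the model
    # receives the alignment explicitly instead of inferring it in context.
    assert len(x_phrases) == len(y_phrases)
    return [p for pair in zip(x_phrases, y_phrases) for p in pair]

x = ["x1 x2", "x3"]
y = ["y1", "y2 y3"]

standard = x + y               # Standard: all x phrases, then all y phrases
aligned = pre_aligned(x, y)    # Pre-Aligned: each x-phrase followed by its y-phrase
```

The accuracy gap between the two prompt formats in the table is what isolates alignment, rather than vocabulary learning, as the bottleneck.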
Summary: This paper systematically investigates the in-context learning (ICL) capabilities of large language models (LLMs) on sequence-to-sequence (Seq2Seq) tasks. The analysis reveals that LLMs struggle to align input and output sequences for longer inputs, limiting their ICL effectiveness. To address this, the authors propose ICA-Tune, a method that fine-tunes attention parameters using in-context examples and a next-token prediction objective. Compared to standard fine-tuning, ICA-Tune demonstrates superior sample efficiency and better generalization to out-of-distribution (OOD) instances. Claims And Evidence: see strengths and weaknesses Methods And Evaluation Criteria: see strengths and weaknesses Theoretical Claims: see strengths and weaknesses Experimental Designs Or Analyses: see strengths and weaknesses Supplementary Material: yes Relation To Broader Scientific Literature: see strengths and weaknesses Essential References Not Discussed: see strengths and weaknesses Other Strengths And Weaknesses: Strengths 1. This paper presents the first formal evaluation of ICL on Seq2Seq tasks, which has significant practical implications for applications such as instant translation and text-to-SQL. 2. The counterfactual experiments provide valuable insights, highlighting that alignment is the missing link in Seq2Seq ICL. The mechanistic evaluation of alignment emergence in middle layers is particularly insightful. 3. The proposed ICA-Tune method demonstrates clear improvements in sample efficiency and OOD generalization, supported by rigorous empirical experiments. Weaknesses 1. The analysis is primarily conducted on synthetic data, justified by the need to avoid data contamination (L33). However, it would be beneficial to evaluate ICA-Tune on tasks that LLMs have encountered during pre-training, such as translation. 
This could help determine whether Seq2Seq ICL is an online learning process (as assumed in this paper) or a task-level generalization mechanism that composes pre-existing knowledge [1]. I'd like to hear the authors' opinion about the other ICL perspective (generalization via pre-existing knowledge composition [1]). - [1] What Do Language Models Learn in Context? The Structured Task Hypothesis, ACL'24 2. After fine-tuning with ICA-Tune, does the LLM’s performance degrade on general Seq2Seq tasks (e.g., translation)? If so, this could indicate a limitation of ICA-Tune’s applicability. Is it possible to explore methods that specifically learn alignment in a way that generalizes across diverse Seq2Seq tasks? 3. In L84, references to previous studies would be more effective if explicitly cited. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the constructive suggestions. We address some of the concerns below. (1) Pre-existing knowledge composition is a plausible hypothesis to explain ICL on real tasks. However, the experiments in [1] were on scalar prediction tasks. We conjecture that for structured sequence prediction tasks, the alignment of output substructures with relevant input phrases is necessary for compositional generalization, irrespective of whether ICL is due to online learning or knowledge composition, or a combination of the two. (2) Catastrophic forgetting of previous tasks, for example translation of real tasks after fine-tuning on synthetic tasks, is present in both standard fine-tuning and ICA-Tune. However, on variants of the same synthetic task, ICA-Tune does generalize better as we discuss in the last paragraph of Section 4.2. We have been thinking hard about methods that specifically learn alignment in a way that generalizes across diverse Seq2Seq tasks. Section 5 presents a discussion on why such a capability could be difficult in standard causal transformers. However, it may be possible to design multi-task in-context datasets for structured prediction tasks that enables LLMs to in-context learn alignments for similar tasks. (3) Thanks for pointing out. We will explicitly cite previous studies on L84 in the final version of the paper.
Summary: This work presents an interesting case study on LLM's in-context ability on translation-style seq2seq problems. More specifically, this paper studies seq2seq problem involving both learning the alignment and the target-side vocabulary. This work creates synthetic tasks for the analysis to avoid training data leakage. The synthetic tasks are composed of a PCFG generating the source, PFA generating the target vocabulary, and a sampling function generating the alignment. By manipulating hyperparameters of the synthetic tasks and checking the attention weights, this work notices that LLM struggles at learning the alignment. Then, this work also proposes a fine-tuning-based method to improve this specific ICL ability and show that it is more effective than IID supervised learning. Claims And Evidence: This work mainly makes claims on two parts: (1) an analysis showing LLMs struggle at learning alignment through ICL; (2) a new fine-tuning method called ICA-Tune that helps LLMs to learn alignment. I find the synthetic task-based analysis quite convincing. The experiment setup is clear and rigorous, and the attention weight-based analysis shows clear patterns. I also believe that ICA-Tune improves the model's ability to learn alignment, but the comparison between ICA-Tune and standard SFT is a bit weird. Essentially, they work on two different tasks. The ICA-Tune is learning an in-context learning task, while the standard SFT is learning an IID seq2seq task. I also couldn't find related results on "latest PEFT methods" in the paper, as mentioned in line 108. Methods And Evaluation Criteria: See above. Theoretical Claims: No major theoretical contributions. Experimental Designs Or Analyses: See above. Supplementary Material: No major issues. Relation To Broader Scientific Literature: This work can be seen as a good next step related to previous work studying LLM's ability to learn regular language. 
The synthetic experiment design is clean and hopefully will inspire more future work on detailed analysis of the limitations of LLM's in-context learning abilities. Essential References Not Discussed: No essential missing references. Other Strengths And Weaknesses: n/a Other Comments Or Suggestions: Line 384: does -> does not? Figure 2 and Table 1 use very similar examples, but Figure 2 says "attention is non-monotonic" and Table 1 says "alignment is monotonic". This is really confusing, even though I can understand how the grammar works from the main text. Questions For Authors: 1. For ICA-Tune, do you only train on the k+1-th example for the k-shot learning input, or do you also train on all previous k outputs? 2. My understanding is that due to the use of PFA to generate y-phrases, each output is non-deterministic. Is my understanding correct? If so, when computing prediction accuracies, do you need to consider all correct outputs? And have you done experiments checking if this non-determinism makes learning alignment much harder? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback. We respond to the questions below. (1) We train on all y-tokens, including the outputs from all previous k examples. For the comparison between standard fine-tuning and ICA-Tune, we scale the batch size for standard fine-tuning to match the number of examples used in each batch for ICA-Tune. (2) Yes, your understanding is correct. When computing accuracy, we evaluate predictions against all valid outputs defined by the PFA. Yes, the learning could be harder, but we wanted to be close to real tasks where such non-determinism is expected.
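The scoring rule described in the rebuttal — a prediction counts as correct if it matches any output licensed by the PFA — can be sketched with a toy automaton over output phrases. Every state, phrase, and transition below is hypothetical, invented purely to illustrate the multi-valid-output evaluation, not taken from the paper's actual task grammar:

```python
# Toy finite automaton over y-phrases: from each state several emissions
# are valid, so a single input can have many correct outputs.
# All states and phrases here are made up for illustration.
pfa = {
    "s0": [("aa", "s1"), ("ab", "s1")],
    "s1": [("ba", "s2"), ("bb", "s2")],
    "s2": [],  # accepting state: no outgoing transitions
}

def valid_outputs(state="s0", prefix=()):
    """Enumerate every phrase sequence the automaton can emit."""
    if not pfa[state]:  # accepting state reached
        yield " ".join(prefix)
    for phrase, nxt in pfa[state]:
        yield from valid_outputs(nxt, prefix + (phrase,))

def is_correct(prediction):
    # A prediction is scored correct if ANY licensed output matches.
    return prediction in set(valid_outputs())

assert is_correct("aa bb")       # one of the four licensed outputs
assert not is_correct("aa cc")   # "cc" is never emitted
```

With this toy grammar, `valid_outputs()` enumerates four sequences ("aa ba", "aa bb", "ab ba", "ab bb"), mirroring the non-determinism the reviewer asks about: accuracy must be computed against the whole set, not a single reference.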
Summary: This paper investigates a critical challenge in in-context learning for sequence-to-sequence tasks, where they find modern LLMs struggle to learn alignments between input and output sequences in-context. The authors first show that providing explicitly aligned in-context examples dramatically improves performance. They then introduce ICA-Tune, a fine-tuning method using in-context examples, which not only improves accuracy compared to standard supervised fine-tuning but also yields better sample efficiency and out-of-distribution generalization. A detailed mechanistic analysis demonstrates that ICA-Tune enables the emergence of input–output alignments in the middle layers of the LLM, even without direct supervision, thereby facilitating the formation of induction heads for effective token distribution learning. Claims And Evidence: The experimental results generally support the paper’s claims. However, several aspects could be further controlled or clarified: 1. When using pre-aligned prompting, the increase in prompt length (and thus the number of tokens) may provide the model with additional context and compute used in attention, potentially biasing the comparison. 2. Similarly, the in-context fine-tuning naturally benefits from extended sequences, raising the possibility that performance gains might partly stem from this increased sequence length and computation rather than from the alignment learning per se. Methods And Evaluation Criteria: See other points. Theoretical Claims: There are no theoretical claims in the main paper; the paper is largely empirical and mechanistic. Experimental Designs Or Analyses: See other points. Supplementary Material: I reviewed most of the appendix. Relation To Broader Scientific Literature: This paper is related to in-context learning mechanisms, particularly building on work about induction heads, extending this line of study to the seq2seq domain.
Essential References Not Discussed: no Other Strengths And Weaknesses: Strengths: 1. The paper offers a clear and formal formulation of the seq2seq in-context learning problem. 2. The proposed ICA-Tune method is both simple and effective, demonstrating substantial improvements in performance. 3. The mechanistic analysis is thorough, with well-designed probes that provide insights into how alignment emerges within the model. Weaknesses: 1. The notation in Section 3.1 is somewhat confusing. For example, in Equation 2, it is not immediately clear to me that q indexes a token in the input x while p serves as the phrase index. Clarifying that the task maps a token in x to an entire phrase in y would improve readability. 2. The task formulation is highly synthetic—each input token is aligned with a single output phrase. This simplification may not fully capture the complexity of real-world seq2seq tasks. 3. The improvement observed with pre-aligned in-context sequences might be partly due to the longer prompt (i.e., more tokens providing additional context and compute) rather than solely due to better alignment. 4. The comparison between ICA-Tune and standard fine-tuning might be influenced by the fact that the test examples in the ICA-Tune setting are less out-of-distribution compared to those in the standard fine-tuning setup. Overall, while the study is valuable for understanding seq2seq ICL, the highly controlled synthetic nature of the experiments may limit the generality of the findings and robustness of their claims. Other Comments Or Suggestions: please see other points Questions For Authors: please see other points Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the detailed comments. We address the concerns below. (1) Thanks for pointing this out; we will make the clarification in the final version of the paper. (2) We agree the task is synthetic but it is related to the model followed in early statistical machine translation models. Further, we present an evaluation of ICA-Tune on real translation tasks described below. Here also, we observe that ICA-Tune is more sample efficient and generalizes better to OOD instances. We compare standard fine-tuning and ICA-Tune on three real Machine Translation tasks — from English to {Lithuanian(lt), Hindi(hi), Tamil(ta)}. In each case, we evaluate both on the in-distribution (ID) test set and on an out-of-distribution (OOD) test set. Details of the experiments and the results appear below. ### Experiment Details | Model Used | Llama-2 7B | |----------------|-----------------------------------------------| | Number of Training Examples | 40,000 | | Train Dataset | [Europarl (For En-Lt)](https://opus.nlpl.eu/Europarl/corpus/version/Europarl), [Samanantar + BPCC + Speech Lab (For En-Hi)](https://huggingface.co/datasets/hamees/INDiC-BPCC-hq), [Samanantar (For En-Ta)](https://huggingface.co/datasets/ai4bharat/samanantar) | | Test Dataset | [Flores (In-Domain for all three language pairs)](https://huggingface.co/datasets/openlanguagedata/flores_plus), [Tanzil (Out-of-Domain for En-Hi, En-Ta)](https://opus.nlpl.eu/Tanzil/corpus/version/Tanzil), [EMEA (Out-of-Domain for En-Lt)](https://opus.nlpl.eu/EMEA/corpus/version/EMEA) | | Epochs | 2 | | Batch Size | 2 | | Learning Rate | 0.0002 | | LR Scheduler | Linear | | Warmup Steps | 500 | | Grad Accumulation Steps | 1 | | Weight Decay | 0.0 | | Label Smoothing| 0.001 | | LoRA Rank | 256 | | LoRA Alpha | 512 | | LoRA Dropout | 0.05 | We report the results after the two modes of fine-tuning: ### COMET-22 Scores (Higher is better) | | En-Lt (ID) | En-Lt (OOD) | En-Hi (ID) | En-Hi (OOD) | En-Ta (ID) | En-Ta (OOD) |
|---------------|------------|-------------|------------|-------------|------------|-------------| | Standard Fine-tune | 0.63 | 0.70 | 0.68 | 0.58 | 0.57 | 0.61 | | ICA-Tune | 0.71 | 0.75 | 0.72 | 0.63 | 0.69 | 0.76 | We observe that for both in-distribution and out-of-distribution test examples, ICA-Tune performs better than standard fine-tuning. (3) We repeat experiments with a reduced number of examples (14 instead of 16) in the pre-aligned sequence to ensure that the total prompt tokens match the standard version. Below, we report accuracy for the reduced pre-aligned setting. Even after equalizing the number of tokens, the pre-aligned prompting significantly outperforms standard prompting. | Model | Standard | Pre-Aligned | Reduced Pre-Aligned | |----------------|----------|-------------|----------------------| | Llama-3.2-1B | 27.5 | 100 | 99.38 | | Llama-3.2-3B | 30.63 | 95.94 | 95 | | Llama-3.2-8B | 36.25 | 98.44 | 99.38 | (4) For the comparison between ICA-Tune and standard fine-tuning, we use the exact same set of OOD test examples. The only difference is that, in ICA-Tune, we prepend each test example with k examples from the training set. We have mentioned this in Subsection 4.2, but we will clarify it further.
DVI: A Derivative-based Vision Network for INR
Accept (poster)
Summary: The paper presents DVI, a Derivative-based Vision network for INRs (Implicit Neural Representations). It consists of a neural network architecture which combines pre-existing, task-specific architectures working on raster data, like images or voxel maps, with INR feature extraction modules that process the derivative maps obtained from the INR. The features computed by the conventional network and those computed out of the derivative maps are fused by feature fusion modules at several layers. The fused features replace the features from raster data in the task-specific network. The paper also claims to contribute a technique to reduce the computational cost to compute the derivative map from the INR compared to using autograd. The paper presents experiments on multiple pixel- or voxel-wise tasks, like image super-resolution, image denoising, 3D volume segmentation, video deblurring and video optical flow estimation. The paper also reports several ablation studies. ## Update after rebuttal I carefully read all reviews and responses. The authors confirmed my doubt about the novelty with respect to the derivations presented in (Xiao et al., 2023) and will revise the paper accordingly. The other reviews do not uncover critical weaknesses, and the responses to them include additional experiments, which confirm the empirical evidence already reported in the paper, and sketches of theoretical arguments, which seem valid and convincing to me. I'd actually suggest to include the latter in the revised appendix. For all these reasons, I confirm my overall recommendation. Claims And Evidence: 1. The paper claims that it "proposes a novel technique to reduce the complexity of the derivative computation" (line 114). Yet, in section 3.3 the paper states "We use the recursive formula for high order derivatives in (Xiao et al., 2023) to compute the derivative map at an accelerated speed."
Indeed, as far as I can tell, the derivations in the appendix, bar a change of notation, are the same as those presented in (Xiao et al., 2023), which presents a general framework to compute derivative maps of generic neural networks, therefore covering also the case of INRs (i.e. fully connected networks). If this is the case, the authors should remove the claim on the novelty of the technique, and just state that to compute derivative maps of INRs they rely on the accelerated recursive operators defined in (Xiao et al., 2023). I'd suggest to remove also section 3.3, and use the space to increase the size of tables and figures. If this is not the case, then I'd suggest that the authors clearly point out the main differences with respect to (Xiao et al., 2023). 2. The method is based on the claim that "high order derivative map of INR encapsulates the semantic information" or, similarly, "The derivatives from INR are effective because they encode semantic information during the fitting process". It is never clarified what "semantic information" is encoded in the INR. The claim is said to be validated by the higher performance obtained by DVI with respect to conventional raster-based neural networks. The authors indeed control for the confounding factor given by the larger capacity of the model with the fusion modules by feeding into them zero or random derivatives and verifying that in this case the DVI add-on does not improve performance. Therefore, I believe that the authors have provided clear and convincing evidence that the derivative maps are useful to increase performance on several tasks. Yet, they haven't shown that these high-order derivative maps "encapsulate" semantic information. They haven't even clarified what they mean by "semantic information".
I'd suggest to reword the claims on INR containing semantic information into sentences that more plainly and clearly state that high order derivatives from INRs are shown to be useful to solve tasks more effectively. Methods And Evaluation Criteria: The proposed method is tested on several benchmarks for 5 different, pixel-level tasks. I'm not an expert in all the tasks, but the diversity of tasks makes me positive toward the soundness of the evaluation procedure. Theoretical Claims: I checked the derivations of the fast derivative map operator in the appendix at a high level. They seem to me to closely resemble the derivations in (Xiao et al., 2023). If this is confirmed by the authors, I'd suggest to remove them from the appendix and point the reader to the original paper. Experimental Designs Or Analyses: I had a doubt about the comparisons against pre-existing networks, since the INR processing module and, more importantly, the fusion module, add complex operations like cross- and self-attention to the computational graph of the network, which could have invalidated the comparisons. Yet, the ablation studies in sections 5.2 and 5.3 show that this is not the case: when feeding into these modules zero or random derivatives, performance regresses to that of the pre-existing network; at the same time, when concatenating the derivatives to the input of the original network, performance increases. This validates that the higher performance is due to the information provided by the derivative map of the INR, and makes the main comparisons sound. I'd suggest to tone down the claims on the inability of INSP (Xu et al., 2022) to handle the tasks. That method is trying to solve a much harder task, i.e. processing INRs by processing only their weights, i.e. without materializing the discrete signals. It is good to have it as a baseline to show that this approach, while intellectually more satisfying, currently gives inferior performance.
But it should be noted in the main text that the comparison is greatly affected by this fundamental difference between the methodologies. Supplementary Material: I reviewed most of it. I'd suggest to move B.1 about normalization of the derivative maps into the main paper. We know since batch norm that normalization may have non-negligible and hard-to-anticipate effects on the training and inference performance of neural networks. Hence, I'd suggest to make it integral to the methodology how derivative maps are normalized, and not to treat it as an implementation detail. Relation To Broader Scientific Literature: To the best of my knowledge, the idea of combining pre-existing networks with derivative maps of INRs to solve dense, pixel-wise tasks has never been explored before. Essential References Not Discussed: All relevant references are cited or discussed. I just note here that the References section needs some cleaning. Three random examples: "Neural processing of tri-plane hybrid neural fields." has been published, so it is not anonymous anymore; "Neural functional transformers" was published in NeurIPS 23; (Xiao et al., 2023) was published in TPAMI as of December 2024. Other Strengths And Weaknesses: S1. The proposed method tackles the unexplored problem of solving dense tasks while processing INRs. S2. The proposed method achieves very good performance on a variety of tasks. S3. The experimental results and the thorough ablation studies provide empirical evidence for most of the claims. W1. The main weakness of the proposed method is the need to materialize the raster signal. Ideally, a method processing INRs because of their continuous nature should be able to perform on par with existing methods processing the original, discrete signals without the need to query the INR to reconstruct the underlying signal, which can be slow and requires making arbitrary decisions like the resolution or the point of view of the rendering for NeRFs.
Other Comments Or Suggestions: Line 064 -> "Vision tasks can be divided into low-level and high-level categories." I don't think this is common terminology and it wasn't clear to me what the authors were referring to. A suggestion could be to use "pixel-wise" or "pixel-level" tasks, like denoising, as opposed to image-wise tasks like classification. Line 072 Video FE -> acronym undefined I'd suggest to change the description of "Neural processing of tri-plane hybrid neural fields." at lines 89-92. Currently, it reads "However, this INR is not suitable for other types of vision data beyond point clouds, and its representation accuracy is lower than SIREN(Sitzmann et al., 2020b)." Both statements are false: in the paper they process several fields, obtained from voxel maps, meshes, and even NeRFs; and Tab. 1 shows that they are equivalent to SIREN, e.g. slightly worse on point clouds but slightly better on meshes. line 153: This information is then intricately integrated -> "intricately" does not seem to me the right word here line 155: two-pronged -> unclear, please reword line 185: we will delve into the intricate details of DVI. -> again "intricate" seems misplaced here line 306: caption of table 2 "denosing" All the figures in Section 5 have wrong references. For instance, line 326 "Figure S6 shows" should be Figure 4. Same for the references to Fig. S7(a), S7(b), S8 and S9 at the beginning of the subsequent sections. Table 7 "neumrical". Moreover, it is not clear how numerical derivatives are used to obtain the reported results, nor to which other columns it should be compared. Please improve the organization of the table. Questions For Authors: My main request is to clarify the relation with the derivations presented in (Xiao et al., 2023), and fix the claims throughtout the paper if they are confirmed to be the same presented in the paper. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thorough and constructive feedback. We address your questions point by point below: ## Q1: Clarify the relation with the derivations presented in (Xiao et al., 2023) We confirm that we use the recursive formula for high order derivatives in (Xiao et al., 2023) to compute the derivative map at an accelerated speed. We will follow your suggestion to revise our claims throughout the paper as requested. We will also follow your suggestion to remove section 3.3 and use the space to increase the size of tables and figures. ## Comments or suggestions: * We will follow your suggestion to reword the claims about INR containing semantic information into sentences that more plainly and clearly state that high order derivatives from INRs are shown to be useful in solving tasks more effectively. * We will follow your suggestion to tone down the claims regarding INSP's (Xu et al., 2022) inability to handle the tasks. We will also note in the main text that the comparison is greatly affected by this fundamental difference between the methodologies. * We will follow your suggestion to move Appendix B.1 about normalization of the derivative maps into the main paper to make it integral to the methodology for how derivative maps are normalized. * We will follow your suggestion to address the issues you identified in Line 064, Line 072, Lines 89-92, Line 153, as well as the problems in the Figures and tables. We sincerely appreciate your thorough review and insightful questions, which will significantly improve the quality of our research! --- Rebuttal Comment 1.1: Comment: I carefully read all reviews and responses. The authors confirmed my doubt about the novelty with respect to the derivations presented in (Xiao et al., 2023) and will revise the paper accordingly. 
The other reviews do not uncover critical weaknesses, and the responses to them include additional experiments, which confirm the empirical evidence already reported in the paper, and sketches of theoretical arguments, which seem valid and convincing to me. I'd actually suggest to include the latter in the revised appendix. For all these reasons, I confirm my overall recommendation.
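On the closed-form derivative computation discussed in this thread: for a one-hidden-layer SIREN-style MLP, the derivative map follows directly from the chain rule, which is the base case that the recursive operators of (Xiao et al., 2023) generalize to deeper networks and higher orders. A minimal NumPy sketch, checked against central finite differences (illustrative only, not the paper's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer SIREN-style INR: f(x) = W2 @ sin(W1 * x + b1) + b2
W1 = rng.normal(size=(16, 1))
b1 = rng.normal(size=16)
W2 = rng.normal(size=(1, 16))
b2 = rng.normal(size=1)

def f(x):
    # forward pass at a single coordinate x
    return W2 @ np.sin(W1[:, 0] * x + b1) + b2

def df(x):
    # closed-form derivative via the chain rule:
    # d/dx sin(W1*x + b1) = cos(W1*x + b1) * W1
    return W2 @ (np.cos(W1[:, 0] * x + b1) * W1[:, 0])

x0, h = 0.3, 1e-6
fd = (f(x0 + h) - f(x0 - h)) / (2 * h)   # central finite difference
assert np.allclose(df(x0), fd, atol=1e-4)
```

The point of contention in the review is precisely that this kind of closed-form recursion, extended to higher orders, is the contribution of (Xiao et al., 2023) rather than of the paper under review.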
Summary: This paper proposes a framework that combines implicit neural representations (INRs) with a traditional raster-based vision network, leveraging high-order derivatives to capture additional semantic or structural information. Experimental results demonstrate performance improvements across a variety of tasks, such as super-resolution, denoising, segmentation, and video processing. Claims And Evidence: The claims regarding performance gains are supported by sufficient empirical results on multiple benchmarks. No other major claims appear to lack evidence. Methods And Evaluation Criteria: The proposed method and its chosen benchmark tasks (e.g., image denoising, 3D volume segmentation) are appropriate to showcase the contribution of derivative-based features. Theoretical Claims: The paper relies on the premise that high-order derivatives from INRs encapsulate semantic information not captured by purely raster-based methods, but provides limited theoretical explanation for why derivatives specifically convey such semantics. A clearer theoretical underpinning would strengthen the argument that these derivative features genuinely reflect deeper semantic cues beyond standard raster data. Experimental Designs Or Analyses: The experiments appear well-structured, with ablation studies and comparisons to both baseline raster-based and INR-based approaches. Additional discussion about potential biases in data pre-processing or hyperparameter tuning would be helpful. Supplementary Material: I reviewed the Appendix. Relation To Broader Scientific Literature: They extend prior implicit neural representation research by proposing a derivative-based approach to incorporate semantic information to bridge the gap between continuous function representations and traditional raster-based architectures. Essential References Not Discussed: Related works are well discussed. Other Strengths And Weaknesses: 1.
The idea of progressively fusing high-order derivative features into a standard vision backbone is novel and shows promise in multiple settings. 2. The computational overhead is not discussed. Will the increased computational overhead for deriving and processing higher-order derivatives limit practical scalability? 3. My major concern lies in the theoretical justification of why high-order derivatives help capture semantic information that raster-based methods cannot. Other Comments Or Suggestions: There should be a space between the references and the main text. For example: "making it extensively applicable in various vision data representations such as images(Strümpler et al., 2022)", it should be "making it extensively applicable in various vision data representations such as images (Strümpler et al., 2022)". Improper citation: "Anonymous. Neural processing of tri-plane hybrid neural fields. In Submitted to The Twelfth International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=zRkM6UcA22. under review". It has an arXiv version with author names. Questions For Authors: See comments above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough and constructive feedback. We address your questions point by point below: ## Q1: Discuss the computational overhead We provided the MAC (Multiply-Accumulate Operations) for all algorithms in Tables 1-4 of our submission, including tests on more complex generative networks. **For your convenience, we summarize key results below**: **Table R1: Performance improvement and computational overhead.** | Method/Dataset/Metric | Performance (Baseline → Ours [+Gain]) | MAC in G (Baseline → Ours [+%])| |:---------------------:|:--------------------------------------------:|:-------------------------------------------:| | StableSR/Manga109/PSNR↑ | 26.81 → 28.97 [**+2.16**] | 12453 → 12581 [**+1.0%**] | | EDSR/Urban100/PSNR↑ | 23.49 → 24.71 [**+1.22**] | 1532 → 1681 [**+9.7%**] | | DiffBIR/McMaster/PSNR↑ | 33.98 → 35.79 [**+1.81**] | 3596 → 3690 [**+2.6%**] | | VNET/Synapse/DSC↑ | 72.62 → 83.60 [**+10.98**] | 31 → 43 [**+38.7%**] | | PVDNet/GoPro/PSNR↑ | 25.98 → 27.09 [**+1.11**] | 250 → 338 [**+35.2%**] | | FlowFormer/Sintel/EPE↓ | 6.35 → 5.67 [**-0.68**] | 93 → 139 [**+49.5%**] | Our approach **improves performance with acceptable overhead across all tasks**. For large networks (StableSR, EDSR, DiffBIR), the overhead is minimal (1.0%-9.7%). Smaller networks show higher relative increases but maintain reasonable absolute MAC counts. We achieve this efficiency through our MLP-specific derivative computation paradigm, **ensuring scalability for larger networks** where relative overhead becomes increasingly negligible. Future implementations could further increase scalability using PyTorch's FlashAttention-V2. 
## Q2: Why INR's high-order derivatives help capture semantic information **INR's high-order derivatives contain richer spectral information than raster representations**, providing more comprehensive semantic information for visual tasks. **Due to space limitations, we provide concise theoretical proofs for three visual tasks**, with the remaining proofs being similarly derived. ### Definition Let $I\_d \in \mathbb{R}^{H \times W \times C}$ be a discrete image and $I\_h = \\{I\_{h\_0}, I\_{h\_1}^{1,0}, ..., I\_{h\_n}^{i,j}, ...\\}$ be the higher-order derivative map, where $I\_{h\_n}^{i,j}$ represents the sampled partial derivative $\frac{\partial^n I}{\partial x^i \partial y^j}$ with $i+j=n$. Here, $I$ denotes the continuous function represented by the INR. ### Analysis: Super-Resolution **Proposition:** In super-resolution, $I\_h$ contains richer semantic information than $I\_d$. **Justification:** Defining semantic richness as: $$\mathcal{S}\_{SR}(I) = \int\_{\\|\omega\\| > \omega\_0} |\hat{I}(\omega)|^2 d\omega,$$ where $\omega_0$ is the Nyquist frequency in $I\_d$, we can derive: $$\mathcal{S}\_{SR}(I\_h) = \sum\_{n=0}^{N} \sum\_{i+j=n} \int\_{\omega\_0 < \\|\omega\\| \leq \omega\_N} |\omega\_x|^{2i}|\omega\_y|^{2j}|\hat{I}(\omega)|^2 d\omega.$$ Since $|\omega_x|^{2i}|\omega_y|^{2j} > 1$ for $\\|\omega\\| > \omega_0$ and $(i,j) \neq (0,0)$: $$\mathcal{S}\_{SR}(I_h) > \int\_{\omega_0 < \\|\omega\\| \leq \omega_N} |\hat{I}\_d(\omega)|^2 d\omega = \mathcal{S}\_{SR}(I_d).$$ ### Analysis: Denoising **Proposition:** In denoising, $I\_h$ contains richer semantic information than $I\_d$. **Justification:** Defining semantic richness as: $$\mathcal{S}\_{DN}(I) = \frac{\int\_{\Omega\_S} |\hat{I}(\omega)|^2 d\omega}{\int\_{\Omega\_N} |\hat{I}(\omega)|^2 d\omega},$$ where $\Omega\_S$ and $\Omega\_N$ represent signal and noise domains, and modeling the noisy image as $I = I\_{clean} + \eta$. Signal components exhibit directional coherence while noise is isotropic.
For appropriately chosen derivatives: $$\frac{\int\_{\Omega\_S} |\omega\_x|^{2i}|\omega\_y|^{2j}|\hat{I}\_{clean}(\omega)|^2 d\omega}{\int\_{\Omega\_N} |\omega\_x|^{2i}|\omega\_y|^{2j}|\hat{\eta}(\omega)|^2 d\omega} > \frac{\int\_{\Omega\_S} |\hat{I}\_{clean}(\omega)|^2 d\omega}{\int\_{\Omega\_N} |\hat{\eta}(\omega)|^2 d\omega}.$$ Therefore, $\mathcal{S}\_{DN}(I\_h) > \mathcal{S}\_{DN}(I\_d)$. ### Analysis: 3D Volume Segmentation **Proposition:** In 3D volume segmentation, $I\_h$ contains richer semantic information than $I\_d$. **Justification:** Defining semantic richness as: $$\mathcal{S}\_{VS}(I) = \sum\_{k=1}^K \alpha\_k \cdot \int\_{\Omega\_k} |\hat{I}(\omega)|^2 \cdot H(\hat{I}|\_{\Omega\_k}) d\omega,$$ where $H$ is entropy. Since higher-order derivatives enhance: 1. Boundary contrast: $|\hat{I}\_h^{i,j,l}(\omega\_{boundary})|^2 \gg |\hat{I}\_d(\omega\_{boundary})|^2$, 2. Directional information: $H(\hat{I}\_h^{i,j,l}|\_{\Omega\_k}) > H(\hat{I}\_d|\_{\Omega\_k})$, therefore, $\mathcal{S}\_{VS}(I\_h)>\mathcal{S}\_{VS}(I\_d)$. We sincerely appreciate your thorough review and insightful questions. If our analyses and theoretical justifications have addressed your concerns, **we would be grateful if you could consider adjusting your evaluation**. If questions remain, we welcome further discussion to address any issues. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. My concerns are addressed and I raise my rating to 3.
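The common thread of the spectral arguments in the rebuttal above is the Fourier identity $\widehat{f'}(\omega) = i\omega\,\hat{f}(\omega)$: each derivative order reweights the spectrum by a factor of $|\omega|$, raising the relative energy of high-frequency content. A minimal 1-D NumPy sketch of this identity, using a toy two-tone signal (an assumption for illustration, not an experiment from the rebuttal):

```python
import numpy as np

# 1-D check of F{f'}(w) = i*w * F{f}(w): differentiation scales each
# Fourier mode by |w|, boosting high-frequency (edge/detail) energy.
n = 1024
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
f = np.sin(3.0 * x) + 0.2 * np.sin(40.0 * x)          # low + high tone
df = 3.0 * np.cos(3.0 * x) + 8.0 * np.cos(40.0 * x)    # analytic f'

w = np.fft.fftfreq(n, d=x[1] - x[0]) * 2.0 * np.pi     # angular freqs
F = np.fft.fft(f)
dF = np.fft.fft(df)

# the spectral identity holds up to floating-point error
assert np.allclose(dF, 1j * w * F, atol=1e-6)

def band_energy(spec, mask):
    return np.sum(np.abs(spec[mask]) ** 2)

hi = np.abs(w) > 20.0                                   # "detail" band
ratio_f = band_energy(F, hi) / band_energy(F, ~hi)
ratio_df = band_energy(dF, hi) / band_energy(dF, ~hi)
assert ratio_df > ratio_f   # derivative raises the high-frequency share
```

Here the high-frequency energy share grows from roughly 4% in the signal's spectrum to dominant in its derivative's, which is the mechanism the rebuttal's $|\omega_x|^{2i}|\omega_y|^{2j}$ weighting factors formalize in 2-D.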
Summary: This paper proposes a novel architecture DVI that integrates high-order derivative information from implicit neural representations (INRs) into raster-based vision networks. The authors address the limitations of existing types of networks for vision tasks: a) raster-based methods lack semantic information due to the conversion process, and b) INR-based methods show limited performance. To resolve it, they extract high-order derivative features from INR and progressively fuse them into existing vision networks. As a result, DVI demonstrates superior performance for various vision tasks, compared to both raster-based and INR-based methods. Claims And Evidence: The authors demonstrate sufficient experimental results to support their claims including: - superior performance for five vision tasks (image super-resolution, image-denoising, 3D volume segmentation, video deblurring, and video optical flow estimation) - ablation studies in Section 5 that validate the impact of derivative maps and the technique to acquire derivative maps. Methods And Evaluation Criteria: The main idea of this method is the progressive fusion strategy to exploit semantic information from high-order derivatives for pre-existing raster-based networks to achieve better performance. Also, the authors introduce a recursive high-order derivative computation technique to reduce the computational cost compared to Autograd. Also, they provide sufficient evaluations on various vision tasks with corresponding metrics. Theoretical Claims: The authors provide a detailed computation process for efficiently acquiring high-order derivatives with clear proofs provided in supplementary material. Also, experimental evaluations confirm the effectiveness of the proposed method in terms of efficiency. Experimental Designs Or Analyses: Experiments on various tasks and detailed ablations successfully validate the effectiveness of this paper. 
Supplementary Material: I have read the supplementary material, especially Section C for the ablation studies in Section 5. Relation To Broader Scientific Literature: This paper clearly suggests the limitation of existing networks and claims the importance of fusing semantic information to raster-based vision networks, which has not been explored. Essential References Not Discussed: This paper claims the importance of exploiting semantic information for various vision tasks to achieve better performance. Thus, it would be helpful to compare this paper with diffusion-based image-restoration methods, such as DiffBIR [ECCV'24] and StableSR [IJCV'24]. Other Strengths And Weaknesses: **Strengths** - It shows strong generalization ability for various vision tasks. - The authors provide extensive experiments for validating the effectiveness. **Weaknesses** - As mentioned in the paper, the burden of computation costs should be resolved. - Also, the lack of comparison with diffusion-based method need to be supplemented. Other Comments Or Suggestions: - Several tables and figures in the manuscript are too small. It would be better to adjust the size of tables and figures. Questions For Authors: - Although a lot of diffusion-based models have been recently introduced, this paper does not mention any diffusion-based method. It could be helpful to include diffusion-based methods in the experiments due to their superior performance on various vision tasks. - Can you provide the amount of additional computational overhead of this method compared to baselines? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thorough and constructive feedback. We address your questions point by point below: ## Q1: Include diffusion-based methods for comparison We included **5 diffusion-based methods** for comparison across all vision tasks addressed in our paper. These **include the 2 algorithms you suggested** (StableSR [IJCV'24] and DiffBIR [ECCV'24]), as well as 3 additional algorithms (MedSegDiff-V2 [AAAI'24], VD-Diff [ECCV'24], and FlowDiffuser [CVPR'24]) to demonstrate the effectiveness of our method: **Table R1: Quantitative results (PSNR↑) for image super resolution task.** | Method | Set5 | Set14 | BSD100 | Urban100 | Manga109 | MAC(G) | Param(M) | |:---------------:|:-------:|:-------:|:-------:|:-------:|:-------:|:----:|:------:| | StableSR | 30.09 | 27.25 | 25.34 | 23.18 | 26.81 | 12453 | 148 | | DVI(StableSR) | 31.58 | 31.12 | 27.06 | 25.12 | 28.97 | 12581 | 156 | **Table R2: Quantitative results (PSNR↑) for image denoising task.** | Method | Kodak24 | CBSD68 | McMaster | MAC(G) | Param(M) | |:--------------:|:-------:|:-------:|:-------:|:------:|:------:| | DiffBIR | 34.34 | 33.42 | 33.98 | 3596 | 379 | | DVI(DiffBIR) | 35.53 | 35.05 | 35.79 | 3690 | 385 | **Table R3: Quantitative results (DSC↑) for 3D volume segmentation task.** | Method | Mean | Spl | Rkid | Lkid | Gal | Liv | Sto | Aor | Pan | MAC(G) | Param(M) | |:--------------------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:----:|:------:| | MedSegDiff-V2 | 75.79 | 86.35 | 85.31 | 87.25 | 48.39 | 89.55 | 73.40 | 75.36 | 60.67 | 1966 | 44 | | DVI(MedSegDiff-V2) | 85.46 | 77.63 | 87.03 | 94.23 | 75.36 | 82.31 | 82.53 | 95.02 | 89.58 | 2101 | 46 | **Table R4: Quantitative results (PSNR↑) for video deblurring task.** | Method | GoPro | MAC(G) | Param(M) | |:--------------:|:-------:|:------:|:------:| | VD-Diff | 28.23 | 236 | 12 | | DVI(VD-Diff) | 29.07 | 259 | 13 | **Table R5: Quantitative results (EPE↓) for video optical flow 
estimation task.** | Method | Sintel(final) | MAC(G) | Param(M) | |:-------------------:|:---------------:|:------:|:------:| | FlowDiffuser | 4.94 | 312 | 15 | | DVI(FlowDiffuser) | 3.92 | 341 | 16 | Our approach consistently **improves performance across all tasks with minimal overhead**: 1+ dB PSNR gain with only **1%** cost for super resolution and **3%** for denoising; nearly 10 percentage point DSC improvement for 3D segmentation with **7%** overhead; and significant performance gains for video deblurring and optical flow with just **9-10%** additional computation. ## Q2: Provide the computational overhead We provided the MAC (Multiply-Accumulate Operations) for each algorithm in Tables R1 to R5 in our previous response, as well as in Tables 1 to 4 in our initial submission. **For your convenience, we rearranged typical comparison results below**, showing both performance improvement and computational overhead for each comparison: **Table R6: Performance improvement and computational overhead on different vision tasks.** | Method/Dataset/Metric | Performance (Baseline → Ours [+Gain]) | Computational Cost (MAC in G) (Baseline → Ours [+Increase])| |:---------------------:|:--------------------------------------------:|:-------------------------------------------:| | StableSR/Manga109/PSNR↑ | 26.81 → 28.97 [**+2.16**] | 12453 → 12581 [**+128**] | | EDSR/Urban100/PSNR↑ | 23.49 → 24.71 [**+1.22**] | 1532 → 1681 [**+149**] | | DiffBIR/McMaster/PSNR↑ | 33.98 → 35.79 [**+1.81**] | 3596 → 3690 [**+94**] | | MedSegDiff-V2/Synapse/DSC↑ | 75.79 → 85.46 [**+9.67**] | 1966 → 2101 [**+135**] | | VNET/Synapse/DSC↑ | 72.62 → 83.60 [**+10.98**] | 31 → 43 [**+12**] | | VD-Diff/GoPro/PSNR↑ | 28.23 → 29.07 [**+0.84**] | 236 → 259 [**+23**] | | PVDNet/GoPro/PSNR↑ | 25.98 → 27.09 [**+1.11**] | 250 → 338 [**+88**] | | FlowDiffuser/Sintel/EPE↓ | 4.94 → 3.92 [**-1.02**] | 312 → 341 [**+29**] | | FlowFormer/Sintel/EPE↓ | 6.35 → 5.67 [**-0.68**] | 93 → 139 [**+46**] | This demonstrates 
that our approach consistently **improves performance across all vision tasks with minimal computational overhead**, making it practical for real-world applications. We achieve this efficiency primarily by reducing the complexity of high-order derivative computations through our MLP-specific derivative computation paradigm. Furthermore, the overhead from our attention-based feature processing can be further reduced using PyTorch 2.2.2's FlashAttention-V2, making our method even more lightweight in future implementations. We sincerely appreciate your thorough review and insightful questions. If our supplementary experiments and analyses have addressed your concerns, **we would be grateful if you could consider adjusting your evaluation**. If questions remain, we welcome further discussion to address any outstanding issues. --- Rebuttal Comment 1.1: Comment: The authors have resolved all my concerns regarding the diffusion-based model and computational costs. The experimental results indicate that it can also be effectively applied to diffusion-based models, including the SOTA diffusion-based image restoration models StableSR [IJCV'24] and DiffBIR [ECCV'24]. Moreover, they have demonstrated that the performance improvements are beneficial enough while incurring minimal computational overhead. For these reasons, I will raise my overall recommendation.
Inference-Time Decomposition of Activations (ITDA): A Scalable Approach to Interpreting Large Language Models
Accept (poster)
Summary: The paper introduced a novel method for decomposing language model activations into interpretable features that could replace SAEs with matching pursuit. Thanks to its efficient and fast-converging matching pursuit algorithm (and not having to perform expensive training of NNs), this enables scalable learning about the features of an extremely large language model. While it's 10% short of the reconstruction performance of the SAE, it still suggests a scalable, promising approach that can also be transferred to other models. ## update after rebuttal Although this paper provides a highly efficient proof-of-concept alternative to the current SAE, I believe there is room for improvement that validates the actual usefulness of the proposed method. The main concern is that, given the limitations of and skepticism about the validity of current SAEs, the proposed method is a weaker, faster version of them, which raises even more doubts about its validity. Providing more concrete evidence of the validity and usefulness of the proposed method would make the paper stronger. Still, I have decided to increase the score to "Accept", given the potential of the method. Claims And Evidence: The paper claims ITDA is a faster, scalable alternative to SAEs for interpreting LLMs. It says ITDAs train 100-1000x faster, needing just 0.1-1% of the data—like 1 million tokens versus 16 billion for SAEs—and can handle huge models like Llama-3.1 70B and 405B in minutes on a single GPU. This checks out with examples like training 55 ITDAs on GPT-2 in under 2 hours versus 250 hours for SAEs. The efficiency and scalability feel solid, though exact times and hardware details would help. Methods And Evaluation Criteria: ITDA's method builds a dictionary of activations greedily, using Matching Pursuit for sparse coding, an alternative to the traditional SAE's gradient descent. It makes sense for fast, interpretable decomposition, especially for big LLMs.
They evaluate with cross-entropy loss for reconstruction, automated interpretability and sparse probing for features, and Jaccard similarity for model comparisons, all tested on datasets like the Pile and models from Pythia to Llama. Using established benchmarks like SAEBench keeps it grounded. My only gripe is the interpretability metrics feel vague; tighter criteria there would sharpen the story. Recently, the paper "Sparse Autoencoders Can Interpret Randomly Initialized Transformers" questioned the validity of autointerp, so it may not be robust enough to measure the performance of ITDA. This is a fundamental problem shared by the literature (not only this paper), however. Theoretical Claims: N/A Experimental Designs Or Analyses: They train ITDAs on Pythia-70m's residual stream, comparing cross-entropy loss scores to SAE variants like ReLU and TopK. The formula—measuring loss degradation against zero-ablation—is clear, and plots show ITDAs plateau at 1M tokens while SAEs improve with more. They also test ITDA's Jaccard similarity against SVCCA and CKA, matching layers across model instances. The sample size (five instances) is reasonable, and results favor ITDA. They then track ITDA similarity during training on Pythia 70m-410m, showing early layers stabilize first. No issues here. Supplementary Material: The supplementary material includes code as a "txt" file, which is quite hard to read or run. Relation To Broader Scientific Literature: The paper makes a big leap from gradient-based SAEs to a classical matching pursuit algorithm. Less reliance on learning the activations of the model using gradients helps the method generalize better to other models. As future work, it would be interesting to extend it to more applications studied in the literature, like activation steering and safety, to further probe its validity and utility.
Essential References Not Discussed: All relevant papers were cited Other Strengths And Weaknesses: The paper took a bold leap to replace SAEs with a classical matching pursuit algorithm. This is very novel, and its benefits (reduced training cost with a slight inference overhead and reasonable reconstruction loss) are impressive. Considering that this is one of the first papers to try this approach, it reasonably provides all the necessary, important insights already. As the paper mentions, ITDA may not be able to fully replace the SAE due to its worse reconstruction loss, but it could be useful for much larger models by providing rough insights into the model. However, at the same time, this raises a question about how useful ITDA actually is. Also, the examples in the appendices seem to show that a latent sometimes activates for semantically different tokens. (Sequence 7008 Token 30. Nurse vs Question vs Fraction, etc.) Other Comments Or Suggestions: N/A Questions For Authors: Q1. How would this technique be used for activation steering? Would there be any difference to using SAE? Q2. What would be the effect of initializing the dictionary from a different dataset? There is a paper "Sparse Autoencoders Trained on the Same Data Learn Different Features" that shows that SAEs are very sensitive to the random seed, which affects the weight initialization. Similarly, would initializing the dictionary with different activations lead to finding different features? Q3. What does "[the dictionary] is associated with a prompt and token index, rather than being learned from the activations of a specific model" mean? It shows up repeatedly across the paper, but it is not clear. Q4. "Sparse Autoencoders Can Interpret Randomly Initialized Transformers" shows that random and trained transformers produce similarly interpretable SAE latents. Would ITDA similarly lead to this weird outcome, or could it be more robust? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your feedback and we appreciate that you feel this work is novel and “its benefits are impressive”. > The efficiency and scalability feel solid, though exact times and hardware details would help. We ran our experiments on a range of hardware so it is hard to provide precise summary statistics about resource usage. We will release a W&B project alongside the camera-ready paper that will include granular information about hardware and run-times. > My only gripe is the interpretability metrics feel vague; tighter criteria there would sharpen the story. Thanks for this. We have expanded the exposition and explanation of interpretability results, and include a description of each of the metrics that we hope makes it easier for the reader to quickly understand the results. However, these benchmarks are quite complex so we feel it is better to point the reader to the SAE Bench paper than try to provide a detailed explanation here. The explanations are similar to those in the SAE Bench paper, but we don’t have space to reproduce them here. > Recently, there is a paper "Sparse Autoencoders Can Interpret Randomly Initialized Transformers"... [also Q4] Thanks for highlighting this, SAE benchmarks are a new field and rapidly evolving so you’re right to question the validity of the results. The paper you reference found that SAEs trained on randomly initialised transformers learn single token features that are deemed to be interpretable. The same token in different contexts will likely have similar representations even in a random model, so it is not surprising that the SAE learns token features. These single token features are then clearly interpretable, as they only respond to a very specific context (presence of the token). This isn’t the case with ITDAs, however. 
We draw the reviewer's attention to the analysis of Latent 17002 in the Appendix, which is an example of a multi-token feature that activates on contexts relating to homework, in contrast to the single-token latents in that paper. On a subsample of 10k inputs taken from the Pile, we calculated how many unique tokens each latent is active on and plotted it here: https://imgur.com/mm3l6jE. > latent often activates for semantically different tokens sometimes… For these decompositions, please bear in mind that the anchor text is only one part of interpreting the behaviour of the latent. While e.g. "Nurse" may be the token activation that is used in the decomposition, it's possible that the contribution due to this token is a broader medical concept (which aligns with the context of the prompt) rather than a concept relating specifically to nurses. > Q1. How would this technique be used for activation steering? Would there be any difference to using SAE? Yes, it would be used in the same way as SAEs, i.e. modifying the latent activations at inference time. We're unsure of the usefulness of SAE steering, however, so haven't included any results relating to this in our paper. > Q2. What would be the effect of initializing the dictionary from a different dataset? … This is a good point, and one that we haven't explored in our paper: we use The Pile throughout our experiments. ITDA dictionaries are greedily selected as we iterate over the dataset, so shuffling the dataset will likely lead to a substantially different decoder. We suspect data ordering will have some effect on the reconstruction and interpretability performance of ITDAs, and we would hope to explore this in follow-up work. > Q3. What does "[the dictionary] is associated with a prompt and token index, rather than being learned from the activations of a specific model" mean? It shows up repeatedly across the paper, but it is not clear.
We agree that this wasn’t clear in the original paper, so we’ve added a system-level diagram (https://imgur.com/a/UIfzcdX) and some explainer text to clarify: “The elements in the dictionary can be viewed from two perspectives. For the purpose of decomposition using matching pursuit, they are absolute activation vectors taken from the model; this perspective is used when describing the algorithms for constructing ITDA dictionaries and for decomposing activations. Alternatively, the dictionary can be viewed as a collection of prompts and references to tokens in those prompts, in combination with part or all of a model. For example, a prompt may be ``The cat sat on the carpet.", the token reference is to the second token ``cat", and the partial model is the first 5 layers of GPT-2. The absolute activation of this element is the activation for the second token in the prompt after the 5th layer in GPT-2. We use this perspective when comparing dictionaries between models, as the prompt and token reference are model-agnostic.” > (Q4 addressed above) Thank you again for your thoughtful and constructive review—we hope that our clarifications and revisions have addressed your concerns, and would be grateful if you might consider updating your score accordingly. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I appreciate the clarifications and additional visualizations, especially the distinction between single-token and multi-token features. While the expanded explanations help, my core concern about the validity of the interpretability remains. I maintain my current score of Weak Accept.
Summary: The paper introduces Inference-Time Decomposition of Activations (ITDA) as a fast and scalable alternative to SAEs for interpreting LLM activations. ITDA constructs a dictionary of representative activations using matching pursuit, allowing it to be trained 100-1000× faster than SAEs with only 0.1-1% of the data, making it feasible for models as large as Llama 3.1 405B. Despite its efficiency, ITDA achieves 90% of the reconstruction performance of SAEs and performs comparably on interpretability benchmarks such as sparse probing and automated interpretability. Unlike SAEs, ITDA enables direct cross-model comparisons, leading to a new representation similarity metric that outperforms SVCCA and CKA in layer-matching tasks. The paper demonstrates ITDA’s potential for analyzing large-scale LLMs, tracking layer convergence, and identifying model behavioral shifts, making it a promising tool for mechanistic interpretability at scale. Claims And Evidence: The paper presents strong empirical evidence for most of its claims, particularly in demonstrating the efficiency and scalability of ITDA compared to SAEs. Methods And Evaluation Criteria: The proposed method, ITDA, is well-motivated as a scalable alternative to SAEs for LLM interpretability. The use of matching pursuit for sparse coding is reasonable, and the greedy dictionary construction aligns with the goal of efficient feature extraction. Evaluation is conducted on standard benchmarks, including SAEBench, sparse probing, and automated interpretability, which are appropriate for assessing interpretability performance. The representation similarity evaluation using layer-matching tasks follows established methods (SVCCA, CKA) and provides meaningful comparisons. Theoretical Claims: The paper does not contain formal proofs but relies on algorithmic descriptions and empirical validation. 
The theoretical foundation of ITDA, particularly its use of matching pursuit for sparse coding and greedy dictionary construction, aligns with established methods in dictionary learning. Experimental Designs Or Analyses: The experimental design is generally sound, with appropriate comparisons between ITDA and SAEs using SAEBench, sparse probing, and automated interpretability. The cross-entropy loss evaluation effectively measures reconstruction performance, and the layer-matching task provides a reasonable test for representation similarity. However, the comparison to SAEs is somewhat limited, as it primarily focuses on ReLU SAEs, while more advanced variants (e.g., TopK, P-Annealing SAEs) are only briefly discussed. Supplementary Material: The supplementary material includes Appendix A (SAE variants) and Appendix B (ITDA latent examples and decomposition case studies). Appendix A provides a detailed comparison of different SAE architectures, including ReLU, TopK, JumpReLU, and BatchTopK SAEs, which helps contextualize ITDA’s performance. Appendix B presents qualitative examples of ITDA latents, showing their activation distributions and corresponding text prompts. Relation To Broader Scientific Literature: The paper extends prior work on mechanistic interpretability and sparse dictionary learning, particularly addressing the computational limitations of SAEs. ITDA builds on classical dictionary learning methods by using matching pursuit for inference-time optimization, enabling efficient decomposition of LLM activations. It also introduces an ITDA-based Jaccard similarity measure, improving upon existing representation similarity metrics like SVCCA and CKA in layer-matching tasks. Additionally, ITDA aligns with recent efforts in model diffing by enabling cross-model comparisons without requiring gradient-based training. While it offers scalability advantages, further empirical comparisons with advanced SAEs would strengthen its positioning in the literature. 
Essential References Not Discussed: n/a Other Strengths And Weaknesses: Strengths: 1. ITDA is 100-1000x faster than SAEs, with only 0.1-1% of the data required. 2. ITDA enables representation similarity analysis, outperforming SVCCA and CKA. Weaknesses: 1. No quantitative interpretability results for larger models. Other Comments Or Suggestions: Langage -> language in the abstract line 13 Questions For Authors: 1. The paper mentions that a lower loss threshold leads to better reconstruction but a larger dictionary. How does this affect interpretability performance? 2. The paper mentions negative activations but does not explore their meaning. Do they correspond to specific model behaviours? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your comments, we appreciate your recognition of the strong evidence for the “efficiency and scalability of ITDA in comparison to SAEs”. > However, the comparison to SAEs is somewhat limited, as it primarily focuses on ReLU SAEs, while more advanced variants (e.g., TopK, P-Annealing SAEs) are only briefly discussed. Thanks for raising this. To clarify, we did evaluate those variants as extensively as ReLU SAEs: In our experiments we compare against all SAE variants that have been evaluated by SAEBench. This includes ReLU, TopK, P-Annealing, Gated, and JumpReLU SAEs. ITDAs almost always have worse reconstruction performance than the best performing SAE variant. However, different SAE variants learn different kinds of features [1], and have different interpretability properties, so the SAE with the best reconstruction may not always be the best choice. As such, in our plots we show the SAE with the overall best reconstruction performance, and the best performing ReLU SAE. This provides a baseline for reconstruction performance without strawmanning SAEs by selecting badly optimised instances. [1] Hindupur, Sai Sumedh R., et al. "Projecting Assumptions: The Duality Between Sparse Autoencoders and Concept Geometry." arXiv preprint arXiv:2503.01822 (2025). [2] https://www.saebench.xyz/ > No quantitative interpretability results for larger models. One of the major advantages of ITDAs is that we were able to train them on large models like Llama 70B and 405B on widely available hardware, unlike SAEs which require considerably more computational resources. However, this means that SAEBench does not support (or currently need to support) multiple GPUs, which would be necessary for Llama 405B. Furthermore, there would be no publicly available SAEs with which to compare the results. 
We could run experiments on Gemma 2 9B, as there are open source SAEs for this model, if you think this would considerably strengthen the paper (but this would probably not be possible during the discussion phase). > The paper mentions that a lower loss threshold leads to better reconstruction but a larger dictionary. How does this affect interpretability performance? Increasing the dictionary size from 4k to 16k on Pythia models, and from 4k to 64k on Gemma 2 2B, results in a modest improvement on autointerp and sparse probing benchmarks. We have now included an appendix section going into considerably more detail on these interpretability benchmarks, for a range of dictionary sizes and L0s. > The paper mentions negative activations but does not explore their meaning. Do they correspond to specific model behaviours? This is a good question and one that we’ve not extensively investigated. Zero-ablating negative latent activations impacts reconstruction performance, but we don’t have a good understanding of how negative latent activations affect model behaviour. If we interpret strong positive activations as meaning an input is strongly related to a feature, then strong negative activations could mean an input is strongly unrelated to a feature, which is hard to validate. We emphasize the rarity of strong negative activations, however. Our appendix latent examples were cherry-picked to include strong negative activations, but they are rare (<0.001%) in general. SAE latents frequently have small positive activations, so we are not particularly worried about the small positive or negative activations in ITDAs. We hope this addresses your questions and we are keen to improve your faith in, and support of, this paper. Please let us know if you have any further questions or concerns. If these clarifications have addressed your concerns, we would be grateful if you might consider revisiting your overall score. 
--- Rebuttal Comment 1.1: Comment: Thank you for providing responses to my concerns. After reading your response, I decide to keep my score.
Summary: The main idea of this paper is to apply a dictionary learning approach to the problem of finding sparse representations of activations. ITDA builds the dictionary at inference time. The algorithm works by first trying to reconstruct the activation from the atoms in the dictionary. Reconstruction is done by Matching Pursuit (and thus scales at least linearly in dictionary size). If the reconstruction is possible with a sufficiently small number of atoms (a hyperparameter), the latent representation is accepted; if not, $\mathbf{x}$ is added to the dictionary. In this way, the dictionary is repeatedly updated during inference. Claims And Evidence: The authors provide some experiments with their approach. In Section 3 they report a loss as a function of the $\ell_0$ norm, which makes sense and is consistent with my expectations. They also provide SAE benchmarks, and report some positive scores in the text, but I am not aware of the validity of these benchmarks. In Section 4 they also consider representation similarity, which is an interesting new topic. I didn't fully understand the claims they were making in this section, or what the implications are. Methods And Evaluation Criteria: I think there needs to be more explanation of the experiments. Specifically, Section 4 is a bit confusing to me. This might be because of other misunderstandings I have. Theoretical Claims: No theoretical claims are being made. Experimental Designs Or Analyses: See Methods and Evaluation Criteria. Supplementary Material: Unfortunately, I did not have time to evaluate the supplementary material. Relation To Broader Scientific Literature: ITDA is, as far as I can tell, a pretty unique way to learn an SAE, and an approach like this might be faster than traditional approaches. Essential References Not Discussed: .
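The dictionary-update loop summarized in this review can be sketched in a few lines of NumPy. The relative-error threshold `tau` and per-input atom budget `k` are assumed details for illustration, not the authors' exact implementation:

```python
import numpy as np

def matching_pursuit(x, D, k):
    """Greedily approximate x with at most k atoms from D (rows)."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(len(D))
    norms = np.linalg.norm(D, axis=1) + 1e-12
    for _ in range(k):
        scores = (D @ residual) / norms          # normalized correlation with each atom
        i = int(np.argmax(np.abs(scores)))
        a = (D[i] @ residual) / (D[i] @ D[i])    # least-squares coefficient for that atom
        coeffs[i] += a
        residual = residual - a * D[i]
    return coeffs, residual

def itda_step(x, dictionary, k, tau):
    """Accept x's sparse code if reconstruction is good enough; else add x as a new atom."""
    if dictionary:
        D = np.stack(dictionary)
        _, residual = matching_pursuit(x, D, k)
        if np.linalg.norm(residual) ** 2 <= tau * np.linalg.norm(x) ** 2:
            return  # reconstructed within tolerance; dictionary unchanged
    dictionary.append(x.astype(float))

rng = np.random.default_rng(1)
dictionary = []
for _ in range(50):
    itda_step(rng.normal(size=8), dictionary, k=4, tau=0.1)
# An activation already in the dictionary reconstructs exactly, so it is not re-added.
n = len(dictionary)
itda_step(dictionary[0].copy(), dictionary, k=4, tau=0.1)
assert len(dictionary) == n
```

Atom selection here uses the normalized correlation while the coefficient is the projection onto the chosen atom, as in standard Matching Pursuit; the linear scan over the dictionary is what makes the cost at least linear in dictionary size, as the review notes.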
Other Strengths And Weaknesses: Overall, applying iterative dictionary learning tools to learning SAEs is a good idea; however, I find that the paper is inconsistent in its language, and I think this work would benefit from more time spent on the presentation and evaluation. Other Comments Or Suggestions: There is interchanging use of terms like "atom", "token" and "activation". The authors need to be more precise here. This paper could also benefit from a system-level diagram? You should define what $\mathbf{D} \cup \\{ x\\}$ means. Can you be a bit more rigorous when defining *everything*? For example, what is the CE Loss with respect to? You don't need to go over basic definitions of course, but you need to provide enough information that I can quickly figure out what you are doing. Questions For Authors: What is the difference between $x$ and $\mathbf{x}$? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for your feedback and suggestions; we're glad you like the idea and are keen to improve the presentation. > In Section 4 they also consider representation similarity, which is an interesting new topic. I didn't fully understand the claims they were making in this section, or what the implications are. We claim that the Jaccard Similarity between two ITDA dictionaries is a simple, performant, and state-of-the-art measure of representation similarity, and that it outperforms existing methods on a layer matching task on GPT-2 model variants. We propose that this approach opens up exciting research directions in finding differences between models, as we can take the difference between ITDAs as well as their intersection. For two models with pre-trained ITDA dictionaries D0 and D1 of sizes n and m, our method allows for measuring similarity in O(n + m) time. In comparison, SVCCA and CKA require learning a map between representation spaces, and the relative representation measure due to [1] requires computing a dataset of activations and comparing to a set of anchor points. Furthermore, the fact that the intersection of D0 and D1 accurately tracks model similarity suggests that the difference between D0 and D1 accurately tracks differences between models. This opens exciting research directions in "model diffing", where the goal is to find differences between models, for example before and after fine-tuning. > I think there needs to be more explanation of the experiments. Specifically, Section 4 is a bit confusing to me. Thanks for this; we have added more explanation of the experiments, particularly in section 4, and we feel this has strengthened the paper and made it more readable for a general audience.
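The O(n + m) comparison described in this rebuttal is cheap because dictionary elements can be treated as hashable (prompt, token index) references, so similarity reduces to set operations. A minimal sketch (the prompts below are invented for illustration):

```python
def itda_jaccard(dict_a, dict_b):
    """Jaccard similarity between two ITDA dictionaries, treating each atom
    as a model-agnostic (prompt, token_index) reference."""
    A, B = set(dict_a), set(dict_b)
    if not A and not B:
        return 1.0
    return len(A & B) / len(A | B)

# Toy dictionaries for two models (prompt texts are made up):
d0 = [("The cat sat on the carpet.", 1), ("Paris is the capital", 0), ("2 + 2 =", 3)]
d1 = [("The cat sat on the carpet.", 1), ("2 + 2 =", 3), ("def main():", 0)]
print(itda_jaccard(d0, d1))  # 2 shared atoms / 4 distinct atoms = 0.5
```

The set difference `A - B` is the same object the rebuttal proposes to use for "model diffing": atoms present in one model's dictionary but not the other's.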
Here is a summary of the changes, as the full updated section is too long to reproduce here:
- Further explanation of the significance of the results and comparison to additional methods
- Greater explanation of the layer matching task, including a formula for the calculation of the scores to remove all ambiguity
- Moved the layer convergence experiments to the appendix as they did not directly evidence the value of ITDA here
- Added a model-level matching task to emphasize the usefulness of ITDAs for differentiating between models.

> There is interchanging use of terms like "atom", "token" and "activation". The authors need to be more precise here. This paper could also benefit from a system-level diagram? Thanks for raising this. One of the primary advantages of ITDAs is that concepts like "atom", "token", and "activation" are actually interchangeable: ITDA dictionary atoms are activations, but they're also token references. We appreciate that this can be confusing though, so we've added a system-level diagram (https://imgur.com/a/UIfzcdX) and some explainer text to clarify: "The elements in the dictionary can be viewed from two perspectives. For the purpose of decomposition using matching pursuit, they are absolute activation vectors taken from the model; this perspective is used when describing the algorithms for constructing ITDA dictionaries and for decomposing activations. Alternatively, the dictionary can be viewed as a collection of prompts and references to tokens in those prompts, in combination with part or all of a model. For example, a prompt may be "The cat sat on the carpet.", the token reference is to the second token "cat", and the partial model is the first 5 layers of GPT-2. The absolute activation of this element is the activation for the second token in the prompt after the 5th layer in GPT-2.
We use this perspective when comparing dictionaries between models, as the prompt and token reference are model-agnostic.”

Our work draws on research from the fields of sparse dictionary learning, mechanistic interpretability, and representation similarity. Each of these fields uses its own terminology, and we’ve stuck to those conventions. For example, it doesn’t make sense to refer to activations when discussing the representation literature. We’ve added a glossary to the appendix to help the reader (but don’t have enough characters remaining to reproduce it here).

> Can you be a bit more rigorous when defining everything.

Yes, absolutely. In particular, we have updated the methodology in section 3, including the CE score formula and “D ∪ {x}”, and improved the explanation of the experiments in section 4.

> What is the difference between…

These are the same and this was a formatting mistake - we have corrected this, thanks for pointing this out. We again thank the reviewer for their helpful comments, and hope our changes are satisfying. Regrettably the character limit of this response means that we cannot reproduce the changes to the paper here, but we are happy to do so during the discussion period. If these changes have been satisfying, we politely ask the reviewer to reconsider their score.

[1] Moschella, Luca. "Latent communication in artificial neural networks." ICLR (2024).

--- Rebuttal Comment 1.1: Comment: Some of the information included in this rebuttal goes a long way towards understanding the ITDA. The authors certainly should include something like what they included in the rebuttal (excerpt below) in the main paper:

> “The elements in the dictionary ... reference are model-agnostic.”

I see other reviewers also had some confusion related to this, e.g. Q3 of 5pSy. I disagree with your decision to use "atom", "token" and "activation" interchangeably depending on the context when discussing elements of the dictionary.
It is confusing, and in my opinion, mathematically incorrect. They are **not** the same thing.

**Acceptable options for presentation**:
1. **Atoms are activations**. Each activation happens to correspond to a token from a prompt, but these are not atoms themselves. You could **call the prompts/tokens "interpretable labels" of your atoms**, akin to the SAE literature. Then you can explain how the set of labels can be used to construct new dictionaries for different LLMs as desired. This will fall more neatly into a linear dictionary learning framework.
2. If you **insist on referring to prompts/tokens as atoms**, you should **define your (nonlinear) forward model** appropriately. Something like: $$\hat{\mathbf{x}} = \sum_{i} a_i f_{\ell}(\mathbf{d_i}),$$ where $f_{\ell}$ is the "partial LLM" that you mentioned in the system diagram. In practice, $f_{\ell}(\mathbf{d_i})$ would be precomputed, so everything is still linear.

With these clarifications in mind, I did a second read of this paper, and I feel that any misunderstandings I had are primarily due to the way ideas are presented, and not lack of care on my part. For example, statements like this cause confusion:

> ITDA dictionaries consist of prompts and token indices, rather than learned atoms,

You also "learn" atoms, just in an online fashion. Secondly, as I mentioned above, I think this is a misuse of the term atom.

*System Diagram*: This is a good start, but I think it would be good to also include how you can "transfer" a dictionary from one LLM to another using the label prompts.

**I stand by my evaluation of this paper. After clarification from the authors, I am even more confident in my assessment that the presentation is poor. While it is possible that some of these issues could be addressed in the camera ready version, I believe there is too much work to do, and the submitted version falls below an acceptable level of rigor for ICML.
A re-evaluation after significant revisions is warranted, thus I will maintain my score.** --- Reply to Comment 1.1.1: Comment: Thank you for your response. We appreciate that you found the additional information in the rebuttal helpful in understanding our method, as well as your earlier comments about the technical merit of the paper. We think your first option for improving the presentation of the paper is good, and we will include those changes in future versions of the paper.
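The online dictionary-construction procedure discussed in this thread (decompose each activation with matching pursuit against the current dictionary; if the residual is too large, add the activation itself as a new atom, keyed by its prompt/token reference) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the function names, the relative-error threshold `tau`, and the sparsity budget `k` are all assumptions.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(x, atoms, k):
    """Greedy matching pursuit: up to k steps, each step picking the atom
    most correlated with the current residual."""
    x_hat = [0.0] * len(x)
    residual = list(x)
    for _ in range(min(k, len(atoms))):
        scores = [dot(d, residual) for d in atoms]
        i = max(range(len(atoms)), key=lambda j: abs(scores[j]))
        coef = scores[i] / (dot(atoms[i], atoms[i]) + 1e-12)
        x_hat = [h + coef * d for h, d in zip(x_hat, atoms[i])]
        residual = [r - coef * d for r, d in zip(residual, atoms[i])]
    return x_hat

def build_dictionary(activations, refs, tau=0.3, k=8):
    """Online construction: an activation becomes a new atom whenever
    matching pursuit against the current dictionary leaves too much
    relative residual. Each atom keeps a model-agnostic
    (prompt, token_index) reference."""
    atoms, atom_refs = [], []
    for x, ref in zip(activations, refs):
        x_hat = matching_pursuit(x, atoms, k)
        err = sum((a - b) ** 2 for a, b in zip(x, x_hat)) ** 0.5
        norm = sum(a * a for a in x) ** 0.5 + 1e-12
        if err / norm > tau:       # reconstruction not accurate enough...
            atoms.append(list(x))  # ...so the raw activation joins the dictionary,
            atom_refs.append(ref)  # identified by its prompt/token reference.
    return atoms, atom_refs

# Toy run: the duplicate activation reconstructs perfectly and adds no atom.
atoms, refs = build_dictionary(
    [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]],
    [("prompt0", 1), ("prompt1", 1), ("prompt2", 3)])
print(refs)  # [('prompt0', 1), ('prompt2', 3)]
```

The key property the reviewers and authors discuss above falls out of this structure: because atoms are raw activations identified by prompt/token references, the same references can be replayed through a different model to build a comparable dictionary there.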
Summary: The paper proposes a new algorithm for mechanistic interpretability as an alternative to sparse autoencoders. Their algorithm iteratively identifies new token activations to add as dictionary items based on their similarity to the current dictionary. If the similarity is too low (i.e. reconstruction through the dictionary is not accurate enough), the (contextualized) token is added as a new item to the dictionary. Hence, the dictionary items are effectively identifiable from token indices into a prompt. This allows the construction of a new representation similarity measure based on the Jaccard index computed over two dictionaries. While the reconstruction performance of the proposed model is worse, it performs on par with ReLU-SAEs on interpretability benchmarks, but worse than the state-of-the-art Top-K SAEs. However, the proposed method is significantly cheaper to train than SAEs, making it applicable to large open source models with hundreds of billions of parameters. Finally, on a model layer similarity measurement task, the proposed method outperforms previous methods. Claims And Evidence: The support for the claims made is generally convincing. There is also support for the claim that the proposed method is a lot more efficient than SAEs. However, I would like to see a more thorough investigation of the relation between training data size and other hyperparameters of the proposed method with the training time, in comparison to SAEs. Methods And Evaluation Criteria: The evaluation of interpretability seems to rely on a recent benchmark and compares to state-of-the-art models. The baselines for the model instance layer similarity task seem quite old (2017 & 2019), but I am unsure if a more recent one (Lan et al., 2024) is applicable in this case. The authors should clarify this. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: I checked the soundness of the experimental designs and they appear solid. 
Supplementary Material: I briefly looked at some decomposition examples and the MP algorithm, which look fine and were quite helpful. However, since there is significant space left (1 page), it could also be moved to the main body to make the paper more self-contained. Relation To Broader Scientific Literature: The key contributions are related to the mechanistic interpretability literature, specifically the training of sparse autoencoders (SAEs) for large language models. SAEs are extremely expensive to train for large models and therefore investigations are often limited to small models such as GPT-2. The proposed method is reported to be several orders of magnitude faster to train while achieving similar results to some earlier SAEs. Given that the proposed architecture differs significantly from SAEs, it is plausible that there is a lot of room for improvement from future work. Hence, this work opens up meaningful new research directions. Essential References Not Discussed: The discussion of related literature in this paper is substantial. Other Strengths And Weaknesses: The paper uses only 7 out of 8 available pages and thus there is significant room for improvement. For example, some of the graphs from the appendix could be moved to the main body. I would also like to see an investigation of how the choice of prompts etc. influences the interpretability and representation similarity results. Other Comments Or Suggestions: line 389: "an efficient alternative" Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive and thoughtful feedback. In particular we appreciate your recognition of the evidence for the claim that this approach is “a lot more efficient than SAEs” and that you feel this work “opens up meaningful new research directions”.

> However, I would like to see a more thorough investigation of the relation between training data size and other hyperparameters of the proposed method with the training time, in comparison to SAEs.

We agree this is an important avenue for investigation. In our experiments, the ITDAs were trained on 1.28 million tokens, while the SAEs used for benchmarking ranged from 500 million to 16 billion tokens. This offers insight into the performance of ITDAs in low-data settings across target models. A comprehensive exploration of this relationship, however, would require training new SAEs, a process that would take weeks for models like Gemma 2, which unfortunately exceeds our timeframe for revisions. Nevertheless, we will clearly highlight this limitation in our revised manuscript and suggest it as a critical area for future research.

> The baselines for the model instance layer similarity task seem quite old (2017 & 2019), but I am unsure if a more recent one (Lan et al., 2024) is applicable in this case. The authors should clarify this.

This is true; while CKA and SVCCA are standard approaches, they are not recent. Lan et al. propose applying CKA to SAE decoder matrices as a measure of representation similarity; however, applying this approach would require training 170 SAEs on the LLMs, which would take 1-2 months of GPU time. Instead, we have added a more recent relative representation measure due to Moschella, Luca. "Latent communication in artificial neural networks." ICLR (2024). This measure also significantly outperforms SVCCA and CKA, but not our ITDA IoU method.
| Metric | GPT-2 Small | GPT-2 Medium |
| --- | --- | --- |
| Linear Regression (baseline) | 0.16 | 0.07 |
| SVCCA (Raghu et al., 2017) | 0.50 | 0.44 |
| Linear CKA (Kornblith et al., 2019) | 0.69 | 0.61 |
| Relative (Moschella et al., 2022) | 0.87 | 0.78 |
| ITDA (ours) | 0.88 | 0.89 |

Note that for better reproducibility we have replaced our self-trained Pythia instances with two sets of public GPT-2 model instances of different sizes.

> However, since there is significant space left (1 page), it could also be moved to the main body to make the paper more self-contained.

Following feedback from reviewers, we have expanded the main body with a system-level diagram and more explanation of the method and experiments. Consequently, there is now no space to move the examples and algorithm to the main body of the paper. We chose to prioritise these clarifications to help improve the reader’s understanding of the core methodology. If there are specific supplementary plots or analyses you believe would significantly strengthen the main body, please let us know—we will explore adjustments accordingly.

> I would also like to see an investigation of how the choice of prompts etc. influences the interpretability and representation similarity results.

This would be an interesting direction for further exploration. SAEs are highly susceptible to shifts in the distribution of their training data, so it seems likely this is also the case for ITDAs. The interpretability pipeline is time-consuming to run, so we won’t be able to get these results during the review period. However, we think these experiments would meaningfully strengthen the paper and will add them to a future version.

> line 389: "an efficient alternative"

Good spot, thanks! We hope that this has addressed your concerns with the paper, and if so, would ask you to consider increasing your support of this paper.

--- Rebuttal Comment 1.1: Comment: Thank you for your response. I will keep my favorable score.
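Because ITDA atoms are keyed by model-agnostic (prompt, token index) references, the IoU/Jaccard layer-matching score discussed above reduces to plain set arithmetic. A minimal sketch, with a hypothetical function name and toy dictionaries (not taken from the paper):

```python
def itda_iou(dict_a, dict_b):
    """Jaccard similarity (IoU) of two ITDA dictionaries, each given as a
    set of model-agnostic (prompt, token_index) atom references. Runs in
    O(n + m) expected time with hashing; no learned map between
    representation spaces is needed, unlike SVCCA or CKA."""
    a, b = set(dict_a), set(dict_b)
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter) if (a or b) else 1.0

# Toy dictionaries with made-up prompt/token references:
d0 = {("The cat sat on the mat.", 1), ("Paris is in France.", 0)}
d1 = {("The cat sat on the mat.", 1), ("Water boils at 100C.", 3)}
print(itda_iou(d0, d1))  # one shared atom out of three distinct -> 1/3
```

The set difference `d0 - d1` is what the rebuttal's "model diffing" direction would inspect: atoms one model needed that the other did not.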
Prices, Bids, Values: One ML-Powered Combinatorial Auction to Rule Them All
Accept (oral)
Summary: This paper focuses on iterative combinatorial auctions (ICAs), aiming to tackle the issue of exponential bundle space growth in combinatorial auctions. The authors introduce a machine learning (ML) algorithm that utilizes information from both demand queries (DQs) and value queries (VQs) and present the ML-powered Hybrid Combinatorial Auction (MLHCA). MLHCA combines the advantages of DQs and VQs. DQs are more effective in the early auction stage as they are easier for bidders to answer and can provide global information about bidder preferences. VQs are more beneficial in the later rounds as they can precisely capture bidder values. By starting with DQs and then transitioning to VQs, MLHCA can achieve better learning performance. The paper provides a theoretical framework for combining DQs and VQs, analyzes their advantages and limitations, and presents an algorithm. Experimental results show that MLHCA outperforms previous sota mechanisms in terms of efficiency and speed of convergence. Additionally, MLHCA is compatible with various payment and activity rules and can detect inconsistent misreports. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. No issues have been found so far. Experimental Designs Or Analyses: Yes. No issues have been found so far. Supplementary Material: Yes. The code part. Relation To Broader Scientific Literature: The paper's key contributions significantly advance the fields of combinatorial auctions and machine learning. It improves upon traditional iterative combinatorial auctions (ICAs) like CCA, offering a more efficient and less cognitively burdensome alternative in MLHCA. In preference elicitation, the combination of DQs and VQs provides a new approach, enhancing learning performance. Regarding machine learning in auctions, the use of MVNNs and the mixed training algorithm add to the existing knowledge. Essential References Not Discussed: No. Other Strengths And Weaknesses: 1. 
High Efficiency: MLHCA significantly outperforms previous SOTA mechanisms, achieving high efficiency with fewer queries. 2. Reduced Cognitive Load: Combining DQs and VQs lessens bidders' cognitive load. 3. Theoretical and Practical Support: It has a solid theoretical framework and is compatible with practical rules. 4. Smooth Transition between DQs and VQs: I particularly appreciate the transition from DQs to VQs in MLHCA, which effectively combines the two types of queries, leveraging the benefits of each. The bridge bid, a specialized VQ, ensures a seamless transition, preventing potential efficiency drops and enabling the auction to maintain high performance throughout the process. This well-designed transition plays a crucial role here. Other Comments Or Suggestions: No. Questions For Authors: As a reviewer, I'm concerned about the complexity of the MLHCA algorithm. However, I didn't find relevant analysis in the paper and its appendices. There are suspicions that some steps in the algorithm may have relatively high complexity. I hope the authors can offer a simple analysis of the algorithm's time and space complexity in the rebuttal, considering real-world scenarios to show its impact on practical performance. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the positive feedback! If you have any additional questions, please let us know.

“I am concerned about the complexity of the MLHCA algorithm”:

In terms of theoretical time complexity, our MLHCA, like every other ICA - including the CCA and all ML-powered auctions discussed - is NP-hard, because it needs to solve the winner determination problem, an NP-hard combinatorial optimization problem. However, in practice the computational costs are very manageable, as discussed in Appendix G.4. Even in MRVM, the most computationally intensive and realistically-sized domain, the total time required to run all 100 rounds using 16 GB of RAM and 8 physical cores (clocked at 2.20GHz) was approximately 7 days, so all the necessary computations for 1 round take below 2 hours. This is not an issue, given that in the real world, at most 2 clock rounds happen per day. For a real-world instance, just like for MRVM, the total computational cost in a paid cluster service for running our mechanism would be below 5 USD. Our efficiency improvements suggest that the welfare gains of doing so, compared to the CCA, would be approximately 100 million USD. For the other domains the computational costs are even lower. For the experimental results presented in this paper, we had to incur over 100 times the computational costs of a real-world instance. With that being said, we did put effort into reducing the computational cost of our algorithm. For example, we have empirically observed that our mixed query training algorithm needs approximately 10 times more epochs to converge than the DQ-only training algorithm from [Soumalias et al., 2024c]. From an applied spectrum-auction perspective, increasing the computational costs by a factor of 10 does not matter, because this can easily be parallelized. However, for our academic budget when running hundreds of auctions a factor of 10 matters a lot.
This is why we had to come up with the technique described in Remark E.1, which reduces computation time by approximately a factor of 10.
Summary: This paper studies Iterative Combinatorial Auctions (ICA) and proposes a novel ML-based ICA mechanism, MLHCA. Their key empirical finding is that the mixed use of Value Queries (VQ) and Demand Queries (DQ) in their ML-based ICA mechanism significantly improves the efficiency and convergence of ICA compared to prior works. Additionally, they complement this finding with theoretical observations highlighting the disadvantages of using VQ or DQ alone. Claims And Evidence: The core claim of this paper is that a mixed use of value queries (VQ) and demand queries (DQ) in iterative combinatorial auction (ICA) can significantly improve the efficiency and convergence. This claim is primarily supported by experiments, with some (weak) theoretical observations on the disadvantages of using VQ or DQ alone. Methods And Evaluation Criteria: They use a standard benchmark, the Spectrum Auction Test Suite (SATS, Weiss et al., 2017), which is a sensible choice and aligns with prior works cited in this paper. Theoretical Claims: I found no issue with theorems proven in this paper. Experimental Designs Or Analyses: Their statistical testing method appears sound and standard, and the use of a standard benchmark enhances the transparency of their results. Given that their improvement (i.e., smaller efficiency loss) is substantial across all four benchmarks, we have no objections to their findings or statistical analysis. Supplementary Material: The code in the supplementary material was lightly inspected, and it appears to align with the main text. Relation To Broader Scientific Literature: This paper contributes to the growing literature on ML-based ICAs by providing insights into the mixed use of value queries (VQ) and demand queries (DQ). 
While previous ML-based ICAs primarily relied on a single query type such as VQ-based approaches in Weissteiner et al., 2023 or DQ-based methods in Soumalias et al., 2024c, this work demonstrates that the hybrid approach significantly improves the efficiency and convergence of ICAs while minimizing bidder cognitive load. Essential References Not Discussed: All essential references are discussed to the best of my knowledge. Other Strengths And Weaknesses: **Strengths** - This paper presents a novel ML-based ICA mechanism that significantly improves the current state-of-the-art. - The main idea of mixing VQ with DQ has the potential for broader impact and deserves a further study. **Weakness** - The theoretical argument is somewhat weak, as it only highlights the disadvantages of using VQ or DQ alone but does not provide a provable positive result for their mixed use, even in a simple toy model. Other Comments Or Suggestions: I have no other comments. Questions For Authors: Is there any hope of strengthening ex-post Nash incentive compatibility (i.e., truthful bidding is an ex-post Nash equilibrium) into dominant strategy incentive compatibility (i.e., truthful bidding is a dominant strategy), possibly under certain assumptions? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive feedback! If you have any more questions, please let us know.

“Does not provide a provable positive result for their mixed use, even in a simple toy model.”

Lemmata D.9 and D.14, and Example 1 provide positive results, but we agree that our experimental positive results are stronger than our theoretical results. In Appendix D.3 we provide Example 1, which we believe provides very good intuition both on why DQs alone can be ineffective, and on why combining both query types can substantially increase efficiency.

“Is there any hope of strengthening ex-post Nash incentive compatibility into dominant strategy incentive compatibility, possibly under certain assumptions?”

Under extremely strong assumptions, namely that our auction can request an exponential number of value queries from the agents, it is trivial to show that it satisfies DSIC. Additionally, in appendix B.5, we explain how MLHCA can automatically detect if a bidder’s reports are inconsistent with any valuation function (and automatically exclude them from the auction). This restricts a bidder's misreport space to consistent value functions. Under milder assumptions, however, proving DSIC becomes challenging. Please note that DSIC has been proven neither for the CCA, the most established ICA in the real world, nor for any of the ML-powered ICAs in the literature that we are aware of. However, in appendix B, we discuss in detail the incentive properties of our MLHCA, and we provide strong theoretical hints suggesting that MLHCA is more robust to strategic misreports than both the CCA and the other ML-powered ICAs discussed in this paper. Please also see our reply to reviewer U2sY.

--- Rebuttal Comment 1.1: Comment: Thank you for your thorough response. After consideration, I will stand by my original score.
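For readers less familiar with the two query types debated throughout these reviews, here is a toy contrast between a value query and a demand query. The valuation and the brute-force bundle search are purely illustrative assumptions (real mechanisms and bidders do not enumerate all bundles, which is precisely the cognitive-load issue under discussion):

```python
from itertools import chain, combinations

def all_bundles(items):
    """Every subset of the item set (exponential in len(items))."""
    return chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))

def value_query(v, bundle):
    """VQ: the bidder reports her value for one specific bundle."""
    return v(frozenset(bundle))

def demand_query(v, items, prices):
    """DQ: given item prices, the bidder reports a utility-maximizing bundle,
    i.e. argmax over all bundles of v(x) - price(x). Brute force here, which
    is why answering a DQ exactly can be hard for non-'simple' valuations."""
    return max(all_bundles(items),
               key=lambda x: v(frozenset(x)) - sum(prices[i] for i in x))

# Toy complementary valuation: items A and B are only valuable together.
v = lambda x: 10.0 if x == frozenset({"A", "B"}) else 0.0

print(value_query(v, {"A", "B"}))                     # 10.0
print(demand_query(v, ["A", "B"], {"A": 3, "B": 4}))  # ('A', 'B'), utility 3
```

Note how the DQ answer bundles the bidder's whole preference comparison into one report (global information), while each VQ pins down one precise value; the MLHCA discussion above is about when each kind of information is worth its elicitation cost.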
Summary: The paper introduces a new auction method called the Machine Learning-powered Hybrid Combinatorial Auction (MLHCA), designed to improve how items are sold in complex auctions where bidders can place offers on combinations of items. In these auctions, figuring out the best combination of bids is difficult because the number of possible combinations grows very quickly as more items are added. Traditional auctions either use demand queries (DQs) (where bidders state which combination they prefer at a certain price) or value queries (VQs) (where bidders say how much they value a specific combination). MLHCA combines both types of queries, starting with DQs to quickly gather general information and then switching to VQs to fine-tune the final outcome. The paper introduces a "bridge bid" to make the switch between DQs and VQs smooth. The paper provides a few theoretical insights into the failure of DQ-only or VQ-only mechanisms and provides a new mechanism to mitigate those issues. The paper then conducts extensive experiments on standard datasets to validate the proposed mechanism on the standard datasets and shows that it outperforms existing models. Claims And Evidence: Most of the proposed claims are well supported. While the experiments clearly show the superior performance of the proposed mechanism which is the highlight of the submission, some of the theoretical insights in the paper are unclear and inadequate. Please see the weaknesses section for more details. Methods And Evaluation Criteria: The authors test MLHCA on realistic datasets from the spectrum auction test suite (SATS), which simulates different auction environments. They measure success using efficiency loss (how close the final allocation is to the optimal one) and the number of queries required to reach that outcome. MLHCA’s ability to reduce efficiency loss by up to a factor of 10 and cut down the number of queries by up to 58% demonstrates its practical value. 
Theoretical Claims: I checked all the proofs presented in the paper and they seem fine. Experimental Designs Or Analyses: The experiment design and analysis are correct to the best of my knowledge. Supplementary Material: I checked most of the supplementary material and went over the omitted proofs from the main paper. Relation To Broader Scientific Literature: The proposed two-stage mechanism overcomes several shortcomings of the DQ-only or VQ-only ML-powered mechanisms in the existing literature. The notion of the bridge bid introduced in this work also indicates that (perhaps) instead of the traditional DQs and VQs, future research in this area can focus on designing better queries or newer methods for combining DQs and VQs to elicit more information from the bidders instead of more elaborate NN models. The key contribution of the paper is the mechanism in Algorithm 1 which leverages several existing methods and modules (MixedTraining (a straightforward extension of the TrainOnDQs module of [SWHS24] which uses both DQs and VQs instead of only DQs) and NextPrice [SWHS24]). A considerable portion of the theoretical contribution leverages the results from the MLCA paper [BLS19] and ML-CCA paper [SWHS24] which is cited in the submission. [BLS19] Brero, Gianluca, Benjamin Lubin, and Sven Seuken. "Machine learning-powered iterative combinatorial auctions." arXiv preprint arXiv:1911.08042 (2019). [SWHS24] Soumalias, Ermis Nikiforos, et al. "Machine learning-powered combinatorial clock auction." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 9. 2024. Essential References Not Discussed: All relevant works are discussed in the paper to the best of my knowledge. Other Strengths And Weaknesses: A: **Strengths:** The experiments are well designed and extensive which substantiate the superior performance of the proposed mechanism. The notion of bridge bid is novel conceptually and its effect in practice is also highlighted well. 
**Weakness:** The assumptions on the value function are not clear. A crucial aspect in any combinatorial auction is the structure of the value function. But the authors do not explicitly mention it in the work (Line 121 suggests it is non-negative). By reading the whole paper including the supplementary material, unless I am missing something, the only assumption on the value function is that it is non-negative and monotone. For the rest of my comments, I will assume this. In any case, the authors need to clarify this in the beginning.

**Demand Queries vs Value Queries.** A common theme which is repeated several times in the paper is that the DQs are cognitively simpler than VQs for the bidder. While it is well known that DQs are more informative than VQs in terms of eliciting preferences (which is also evident from the experiments), from a computational point of view, DQs can be quite hard to evaluate, especially if the value function is not ‘simple’. Even for submodular value functions, this is known to be NP-Hard [FV10]. Furthermore, there are several experimental studies which suggest that it is hard for participants to optimally respond to a DQ [SZB12, BSW13]. Even the authors argue in Remark E.1 that evaluating a DQ with the MVNN approximating the value function is time-consuming, which is why they reuse the values. So, how is responding to DQs cognitively simpler? In terms of the supermarket example which is used in the paper to provide intuition: if the value function is not ‘nice’ (linear/all-or-nothing etc.), then for a given set of prices, to know the optimal combination of frying pans and coconuts, the bidder needs to evaluate the value of x frying pans and y coconuts in the first place, which is equivalent to a VQ.

[FV10] Feige, Uriel, and Jan Vondrák. "The submodular welfare problem with demand queries." Theory of Computing 6.1 (2010): 247-290.

[SZB12] Scheel, Tobias, Georg Ziegler, and Martin Bichler. 2012.
“On the Impact of Cognitive Limits in Combinatorial Auctions: An Experimental Study in the Context of Spectrum Auction Design.” Experimental Economics, 15: 667–692. [BSW13] Bichler, Martin, Pasha Shabalin, and Jürgen Wolf. 2013. “Do Core-selecting Combinatorial Clock Auctions Always Lead to High Efficiency? An Experimental Analysis of Spectrum Auction Designs.” Experimental Economics, 16(4): 511–545. Other Comments Or Suggestions: A: **Typos:** Line 89, 2nd column: the word ‘preference’ is present twice. Line 197, 1st column: Proofs are deferred to Appendix D. Line 290, 1st column: An extra ‘R’ is present in the Algorithm 1. Line 668, 669: Use \textsc{} for ‘NextQueries’ to be consistent. Line 692, eq (5): What’s the significance of ‘1’ in \stackrel{}{}? Line 843: Use \mathcal{X} to represent the class of feasible bundles. Line 1012, 1013: Typo in word ‘appendix’. Line 1017: MVNNscan -> MVNNs can. Line 1162: inAppendix E.2 -> in Appendix E.2 Line 1440: Typo in word ‘optimization’. Line 1541: … we showed… instead of …we will show… Line 1855: Missing full stop. **Suggestions:** A key aspect of the paper is combining the DQs and VQs and as the authors argue in Line 419-423 (2nd column), the way in which it is combined is important (which is also one of the key contributions of the work). The paper would have been more compelling if the authors had tried combining the DQs and VQs in other ways (VQs first, then DQs or interleaving them and so on), either theoretically or experimentally. That which would have highlighted the significance of the current method of combining them both. As a lot of modules and subroutines in Algorithm 1 are from existing literature, it would help the paper if the authors precisely highlighted their novel contributions in designing the mechanism. Questions For Authors: A: In Lemma B.6, If L(theta)=0, why does it mean that the predicted best response matches the reported one? 
Am I correct to understand that you are assuming the optimization problem has a unique optimal value? If so, why is that the case? Moreover, the proof handles the DQ phase and VQ phase separately and obtains a contradiction under each setting. So, is it true that DQ-only methods such as ML-CCA can also detect inconsistent misreports? In Proposition C.5, do MVNNs always learn an (almost) linear approximation of the value function or is it true only for the given example? More broadly, what is the importance of this result? If MVNNs always learn an (almost) linear approximation of the value function, the paper would be strengthened if the authors also consider a simple linear model to approximate the value function as a baseline to compare it against the more sophisticated MVNNs. What is the significance of the 55% efficiency in Theorem 3.2? The proof only presents an example under which this efficiency can not be achieved. If I chose some other number, say 25%, does a DQ-only auction always guarantee 25% efficiency? I guess with appropriate tweaking of the parameters of the example, one can show that the previous statement is not true. So, why highlight the 55% figure? At a high level, I would suggest presenting this as an Observation/Fact instead of the full-fledged Theorem. **Minor comments:** What is the full form of DWP in footnote 15 in Line 1263? A minor comment but what is the reasoning behind the three words ‘Prices, Bids, Value’ in the paper title? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your very detailed review! If you have more questions, please let us know.

“The assumptions on the value function are unclear”

We only assume that $v(0) = 0$ and monotonicity, which combined imply non-negativity. Both of those assumptions are well-motivated and fairly standard. In Appendix C.1 we reference theoretical results that MVNNs can represent any monotonic function with $v(0)=0.$ We will clarify this in the final revision.

“Combining DQs and VQs in other ways”

In Appendices D and E we provide a lot of intuition on why starting with DQs and then querying VQs is expected to be much better than the other way around. Initializing an auction with random VQs (there are no better initialization algorithms for ML-powered VQs in the literature) will elicit the values of bundles which are far from optimal for the bidders, providing very little actionable information. The DQs at the end will be ineffective for two reasons: first, they will not provide any further information once the concave envelope of the value function is learned. Second, it is very likely that they will not result in reported bundles which nicely fit together (Example 1 provides good intuition for this). Therefore, we have good reasons to believe that swapping the order would significantly worsen the performance. Finally, one additional argument for this query order is that it results in the same interaction paradigm as the established CCA. We will emphasize these arguments more in the final revision.

“Is responding to DQs simpler than VQs?”

While your argument on theoretical time complexity is correct, in practice, a bidder does not need to exactly evaluate her utility to answer a DQ. To make the supermarket example more intuitive: Suppose you go to the supermarket to buy a bundle of food items. Then you can easily disregard most other items, e.g., all the electronic items in the store.
You only need to know that an electronic item’s added value for you is lower than its price tag. Reporting your precise value for a random bundle containing some food items, an iPad and a charger is much harder than deciding that you don’t want to buy the electronic items given their prices. Based on the experience of real-world consultants in spectrum auctions, there are usually many spectrum licenses that a bidder can easily disregard when answering a DQ, knowing that her values for them are below the posted prices without knowing how much below. In contrast to [SZB12], real-world bidders have much more expertise on the values of bundles that fit into their business model than the values of some random bundle. We agree that VQs for good bundles are easy to answer, but this is not true for random bundles in the real world. “A DQ with an MVNN approximating the value function is time-consuming” Using 8 cores, it takes on average less than 200 ms. This computational cost is unrelated to the cognitive effort required of a human bidder. “Does Lemma B.6 require a unique optimal solution?” Lemma B.6 does not require a unique optimal solution. Instead, all optimal solutions have the same predicted utility for the user, defined as <predicted bundle value> - <bundle price when the bidder requested it>. “Is it true that DQ-only methods such as ML-CCA can also detect inconsistent misreports?” Yes, we will mention this implication of our result in the final revision. “What is the meaning of Proposition C.5? Do MVNNs always learn an almost linear approximation of the true value function?” MVNNs can fit queries that cannot be explained by a linear model (once those queries appear in the auction), which gives them a big competitive advantage over linear models. Proposition C.5 provides intuition on how MVNNs apply Occam’s razor when the queries are not sufficiently informative. 
Proposition C.5 shows that in Example 1, the first VQ that MLHCA would ask after the bridge bid would directly push the efficiency from ~55% to 100%. “What is the significance of the 55% efficiency in Theorem 3.2?” For the final version of the paper, we will modify the proof to show that for every $\epsilon>0$, there exist infinitely many instances where any DQ-only algorithm cannot achieve more than $50+\epsilon$% efficiency, no matter how many DQs it generates. Since such a family of instances exists, this constitutes a proof that DQs can be highly inefficient as the sole query type in an auction. Mathematically, it would be an interesting open question whether 50% can be reduced to 25%. However, from a practical point of view, an efficiency of 50% corresponds to welfare losses of billions of dollars. This theoretically emphasizes that DQs massively lose their power in later rounds even if you ask infinitely many DQs, while VQs can always reach 100% efficiency. So, the claim that “DQs are more informative in an auction” actually requires a much more nuanced discussion. Finally, thank you so much for pointing out the many typos. We will correct them all for the final version of the paper.
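The formal demand-query response rule discussed in this thread — a bidder answers a DQ at posted prices by reporting her utility-maximizing bundle — can be sketched in a few lines. This is our illustration, not the paper's model: the brute-force search, the toy value function, and the prices are all hypothetical. It also makes the supermarket intuition concrete: items priced above their added value simply drop out of the chosen bundle.

```python
from itertools import product

def answer_demand_query(value, prices):
    """Return the bundle (0/1 tuple) maximizing utility v(b) - p.b at the posted prices."""
    m = len(prices)
    best_bundle, best_utility = (0,) * m, 0.0  # the empty bundle has utility v(0) - 0 = 0
    for bundle in product((0, 1), repeat=m):
        utility = value(bundle) - sum(p * b for p, b in zip(prices, bundle))
        if utility > best_utility:
            best_bundle, best_utility = bundle, utility
    return best_bundle

# Hypothetical bidder: items 0 and 1 are complements, item 2 is worthless to her.
def value(bundle):
    return 10.0 if bundle[0] and bundle[1] else 0.0

print(answer_demand_query(value, [3.0, 3.0, 4.0]))  # (1, 1, 0): the bidder ignores item 2
```

Note that a human bidder answering the same DQ never needs the exact value of bundles containing item 2; knowing its added value is below its price is enough, which is the point made in the rebuttal above.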
Summary: The paper introduces MLHCA, a Machine Learning-powered Hybrid Combinatorial Auction that integrates demand queries (DQs) and value queries (VQs) to minimize efficiency loss in iterative combinatorial auctions. The authors provide theoretical insights demonstrating that DQs are most effective in the early auction rounds, while VQs enhance efficiency in later stages by refining allocations. This approach addresses the limitations of previous methods that relied solely on one query type. Empirical results show that MLHCA outperforms existing methods by achieving higher efficiency with fewer queries, significantly reducing bidders’ cognitive load. Claims And Evidence: Yes. All claims made in the paper are theoretically justified and supported by adequate empirical validation. The paper convincingly demonstrates that the proposed approach is better than existing methods. Methods And Evaluation Criteria: Yes. They are well defined and well justified. Theoretical Claims: The proofs seem logically sound. I however didn't go through the proofs in the Appendix in detail. Experimental Designs Or Analyses: The experimental design is solid, with strong baseline comparisons (BOCA, ML-CCA, CCA) and real-world spectrum auction datasets (SATS). The metrics (efficiency and query count) are also appropriately chosen. Supplementary Material: No. Relation To Broader Scientific Literature: This paper extends prior work in ICAs by combining demand queries (DQs) and value queries (VQs). The problem studied is significant, and the proposed approach achieves better performance than existing SOTA approaches. Essential References Not Discussed: References are adequate. Other Strengths And Weaknesses: In a way, this paper simply combines two existing approaches, but it does so in a principled way with theoretical justifications and empirical validations! Other Comments Or Suggestions: - A discussion on incentive compatibility would be nice. 
Can the agents misreport their preferences in the elicitation stage? How robust are the VQ/DQ queries and the proposed approach to such strategic misreports? - Have you tried other approaches for enforcing monotonicity in neural networks (such as [this](https://arxiv.org/abs/2307.07512))? Questions For Authors: N/A Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive and thorough feedback! If you have any more questions, please let us know. “A discussion on incentive compatibility would be nice.” We discuss incentive compatibility in detail in Appendix B, where we try to provide intuition on this topic and prove some theoretical results under some assumptions. In summary, neither the most established mechanism, the CCA, nor any of the ML-powered approaches discussed in the literature are perfectly strategy-proof. However, there are strong theoretical hints suggesting that MLHCA is more robust to strategic misreports than the CCA, currently used in the real world, or most other ML-powered ICAs, such as Weissteiner et al. 2022a, 2023 and Soumalias 2024c. First, MLHCA’s DQ phase is compatible with the activity rules used in the CCA to improve its incentive properties (Appendix B.3). Second, MLHCA can automatically detect when a bidder provides inconsistent reports (Lemma B.6). Third, MLHCA leverages marginal economies in the same way as the VQ-based ML-powered auctions to further align incentives. Fourth, as pointed out in Remark B.7, our experiments show that Assumption 2 in Appendix B.5 has become much more realistic for MLHCA than for any previously proposed mechanism. "Other approaches for enforcing monotonicity in neural networks?" No, we did not compare MVNNs against other architectures. MVNNs have a strong theoretical foundation and have proven their success in learning value functions across multiple market design papers [Weissteiner et al., 2022a,Weissteiner et al., 2023, Soumalias et al. 2024b,c]. Additionally, since MVNNs are the main architecture behind these papers that we compare against, by using the same architecture, we can more easily isolate the effect of our query generation algorithms and the learning advantages of combining both query types. 
From an auction perspective, finding alternatives to MVNNs probably does not have the highest priority, since MVNNs work very well in practice. Finally, there is also no evidence suggesting that the suggested architecture would perform better, as the authors of that paper did not compare against MVNNs. “In a way, this paper simply combines two existing approaches, but it does so in a principled way with theoretical justifications and empirical validations!” Thank you for this comment; we believe it is a fair and accurate characterization of our work. Indeed, this was precisely the goal of our paper: to combine the two most prominent query types in iterative combinatorial auctions (ICAs) in a principled and effective way. We used our theoretical results to achieve an unprecedented improvement in efficiency while at the same time trying to keep the interaction paradigm as close as possible to the most established interaction paradigm used in practice. Our aim was not to propose a radically new interaction paradigm, but to improve auction performance in ways that are realistically implementable in high-stakes settings like spectrum auctions. To do this, we developed a new theoretical framework for understanding the advantages of combining demand and value queries, both from a combinatorial optimization and a machine learning perspective. For all theoretical results, we were able to experimentally demonstrate their real-world significance. We then leveraged these insights to create MLHCA, an auction that follows the same interaction paradigm as the established CCA, yet achieves substantial performance gains. MLHCA is the most efficient auction in practice, reducing efficiency loss by up to a factor of 10 and surpassing the previous state of the art while using up to 58% fewer queries. While MLHCA’s similarity to the CCA may make it appear less novel at first glance, we believe this is a strength. 
By identifying the core challenge in value elicitation and addressing it through minimal yet effective changes, we were able to dramatically improve efficiency while remaining fully compatible with established, battle-tested auction paradigms.

Soumalias, E., Zamanlooy, B., Weissteiner, J., and Seuken, S. Machine learning-powered course allocation. EC 2024b.

Soumalias, E. N., Weissteiner, J., Heiss, J., and Seuken, S. Machine learning-powered combinatorial clock auction. AAAI 2024c.

Weissteiner, J., Heiss, J., Siems, J., and Seuken, S. Monotone-value neural networks: Exploiting preference monotonicity in combinatorial assignment. IJCAI 2022a.

Weissteiner, J., Heiss, J., Siems, J., and Seuken, S. Bayesian optimization-based combinatorial assignment. AAAI 2023.
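On the reviewer's monotonicity question: a generic way to hard-wire monotonicity into a network is to constrain all weights to be non-negative and use monotone activations. The sketch below is a minimal NumPy illustration of that general idea only, not the MVNN architecture from the cited papers (MVNNs additionally enforce $v(0)=0$ and use specific bounded activations); the weights here are random toy values.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = np.abs(rng.normal(size=(8, 3)))  # non-negative weights => each layer is monotone
b1 = rng.normal(size=8)
W2 = np.abs(rng.normal(size=(1, 8)))

def monotone_net(x):
    """Non-negative weights + monotone activations give a monotone (non-decreasing) model."""
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU is monotone
    return (W2 @ h).item()

# Adding items (increasing any input) can never decrease the predicted value.
x = np.array([0.2, 0.5, 0.1])
assert monotone_net(x + 0.3) >= monotone_net(x)
```

Monotonicity follows because each layer is a composition of a non-negative linear map and a monotone activation; unlike MVNNs, this toy model does not guarantee a zero value for the empty bundle.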
SLiM: One-shot Quantization and Sparsity with Low-rank Approximation for LLM Weight Compression
Accept (poster)
Summary: Large language models contain extensive parameter counts, leading to significant memory overhead and high inference costs. Pruning and quantization methods address this, but typically both need retraining on large-scale datasets. One-shot methods can reduce the cost, but jointly pruning and quantizing weights under low-bit scenarios is a challenge. This study proposes SLIM, which combines one-shot quantization, low-rank adapters and sparsity to compress language models for efficient inference. SLIM is validated on the OPT and LLaMA-2 families, achieving significant improvements in model efficiency and accuracy across various benchmarks. This study highlights the following: - **SLIM-Quant**: SLIM adopts symmetric weight quantization, and clips the weights before rounding to the nearest integer. The clipping threshold is a hyperparameter, and SLIM-Quant searches for this parameter by recasting the problem as a probabilistic formulation of the quantization process. The authors finally use grid search to find the optimal $\alpha$. - **SLIM-LoRA**: Naive-LoRA is a straightforward approach that minimizes the total error norm between the original weight matrix and the compressed weight matrix, but it overlooks the importance of each weight value. SLIM-LoRA takes $\operatorname{diag}(\mathbf{x})$ as a saliency function F, and minimizes $F(E_Q)$. The solution to this is to compute the SVD of $F(-(E_Q + E_S))$. - **Low-rank adapter quantization**: The authors further quantize the low-rank adapters to save memory and computation. They use an AbsMax group quantization scheme with a group size of 128 for the adapters. - **Post-compression fine-tuning**: The authors only fine-tune the adapters, for efficiency. The straight-through estimator (STE) is used during backpropagation. SLIM is evaluated on the OPT and LLaMA-2 model families. Downstream tasks include MMLU, Piqa, Arc-Easy, Arc-Challenge, WinoGrande and OpenBookQA. The authors also report PPL on WikiText2. 
Baseline methods are one-shot pruning methods like Wanda and SparseGPT, L$^2$QER, and JSQ. Compared with those methods, SLIM shows advantages on both structured and unstructured sparsity. The authors also report acceleration results on RTX 3060 and A100 GPUs. The acceleration ratio is about 2~3x compared with dense full-precision models. Claims And Evidence: Most of the claims are clear and convincing. Methods And Evaluation Criteria: Most of the methods and evaluation criteria make sense. Theoretical Claims: As far as I can tell, this article is based on experiments and does not require strict theoretical proofs. Experimental Designs Or Analyses: Most of the experimental designs are valid. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: This study can be seen as an extension or improvement of N:M post-training pruning. This study is closely related to previous N:M pruning works, such as SparseGPT and Wanda. Essential References Not Discussed: No missing related works. Other Strengths And Weaknesses: Strengths: 1. The paper is well organized and well written. The equations are very clear and easy to understand. The technical content is explained in sufficient detail. Additionally, the use of figures, tables, and examples enhances the clarity of the presentation, ensuring that the key contributions and findings are easy to follow and understand. 2. Combining sparsity, quantization and low-rank adapters is challenging. However, the authors do this successfully. Weaknesses: 1. The improvement does not seem significant. I wonder whether the advantage would still exist in practical instruction-following cases, such as MATH or GSM8K. Other Comments Or Suggestions: Typos: 1. Extra space on line 218. Questions For Authors: Questions: 1. Regarding the acceleration in Table 3, is the acceleration mainly from quantization, or is it from sparsity? It would be better to provide an ablation study. 2. 
Since 'one-shot' is mentioned in the title, what are the results if the post-compression fine-tuning stage is omitted? An ablation study on this would be beneficial. Overall, the paper demonstrates strong clarity and structure. I appreciate the authors' efforts in translating these techniques into practical applications. I recommend a weak accept and suggest that the authors address W1, Q1 and Q2 to provide a more comprehensive perspective. The scores may be revised following the author-reviewer discussion. Code Of Conduct: Affirmed. Overall Recommendation: 3
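The pipeline summarized in this review (clipped symmetric quantization, then a saliency-weighted low-rank adapter computed from the compression error) can be sketched roughly as follows. This is a NumPy illustration of the general pattern under assumed toy values, not the paper's implementation: the weight matrix, activation scales, `clip_alpha`, bit width, and rank are all made up. It quantizes W, forms the error, weights it by a diagonal saliency F = diag(x), and takes a truncated SVD.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))                 # toy weight matrix (out x in)
x_scale = np.abs(rng.normal(size=64)) + 0.1   # per-input-channel activation scale

def quantize_sym(W, bits=4, clip_alpha=2.5):
    """Symmetric quantization: clip weights, then round to the nearest integer level."""
    thresh = clip_alpha * W.std()
    scale = thresh / (2 ** (bits - 1) - 1)
    return np.round(np.clip(W, -thresh, thresh) / scale) * scale

W_q = quantize_sym(W)
E = W - W_q                                   # quantization error

# Saliency-weighted low-rank compensation: take the truncated SVD of E @ F with
# F = diag(x), then undo the weighting so that W_q + L @ R approximates W where
# the salient (high-activation) input channels matter most.
F = np.diag(x_scale)
U, S, Vt = np.linalg.svd(E @ F)
r = 8
L = U[:, :r] * S[:r]                          # (64, r) adapter
R = Vt[:r] @ np.linalg.inv(F)                 # (r, 64) adapter, saliency removed

err_plain = np.linalg.norm((W - W_q) @ F)
err_lora = np.linalg.norm((W - (W_q + L @ R)) @ F)
# The rank-r adapters strictly reduce the saliency-weighted error (err_lora < err_plain).
```

The point of the weighting is that the truncated SVD then spends its limited rank on the error components that most affect the layer output, rather than treating all weight errors equally.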
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful comments. We have provided a detailed reply to address all of your points. # Significance of Accuracy Improvements and MATH Benchmark SLiM achieves up to 5.66% higher average accuracy than leading compression methods across six zero-shot tasks. Per your request, we evaluated LLaMA-2-7B/13B on the MATH benchmark using default LM-Evaluation Harness settings. Note that our vanilla models do not employ CoT[1], which can boost math problem performance. Evaluations on GSM8K are ongoing but slow (over 20 hours for a 13B model). We are also testing quantization on sparse checkpoints and will share results promptly, despite rebuttal constraints. ## LLaMA-2 7B \\begin{array}{|c|c|c|} \\hline \\rowcolor[gray]{0.9} Method & Sparsity Pattern & MATH \\\\ \\hline \\rowcolor[gray]{0.95} Dense & N/A & 0.26 \\\\ \\hline \\rowcolor[gray]{1.0} SparseGPT + OPTQ & 2:4 & 0.04 \\\\ \\rowcolor[gray]{0.95} Wanda + Group AbsMax & 2:4 & 0.00 \\\\ \\rowcolor[gray]{1.0} SLiM-LoRA + SLiM-Quant & 2:4 & 0.64 \\\\ \\hline \\rowcolor[gray]{0.95} SparseGPT + OPTQ & Unstructured & 0.18 \\\\ \\rowcolor[gray]{1.0} Wanda + Group AbsMax & Unstructured & 0.04 \\\\ \\rowcolor[gray]{0.95} SLiM-LoRA + SLiM-Quant & Unstructured & 0.34 \\\\ \\hline \\end{array} ## LLaMA-2 13B \\begin{array}{|c|c|c|c|} \\hline \\rowcolor[gray]{0.9} Method & Sparsity Pattern & MATH \\\\ \\hline \\rowcolor[gray]{0.95} Dense & N/A & 0.60 \\\\ \\hline \\rowcolor[gray]{1.0} SparseGPT + OPTQ & 2:4 & 0.34 \\\\ \\rowcolor[gray]{0.95} Wanda + Group AbsMax & 2:4 & 0.12 \\\\ \\rowcolor[gray]{1.0} SLiM-LoRA + SLiM-Quant & 2:4 & 0.62 \\\\ \\hline \\rowcolor[gray]{0.95} SparseGPT + OPTQ & Unstructured & 0.46 \\\\ \\rowcolor[gray]{1.0} Wanda + Group AbsMax & Unstructured & 0.36 \\\\ \\rowcolor[gray]{0.95} SLiM-LoRA + SLiM-Quant & Unstructured & 0.69 \\\\ \\hline \\end{array} # Speedup Breakdown We’ve included [this graph](bit.ly/3QXcit2), showing the contributions of quantization 
and sparsity to SLiM’s speedup, evaluated in Quantized-only and Sparse+Quantized settings using Sparse Marling kernels in vLLM. The results indicate that quantization drives most of the speedup, with sparsity contributing less. # Optionality of Post-Compression Fine-Tuning SLiM is a one-shot compression method that improves model accuracy without requiring fine-tuning. The results in `Table-1-Page-7` demonstrate that SLiM surpasses state-of-the-art compression methods without any fine-tuning. While an **optional** PEFT step can further enhance accuracy, it is not integral to SLiM’s core approach. Additional improvements from PEFT are detailed in the ablation studies in `Table-2-Page-7` and `Table-7-Page-14`. [1] Wei, et al. “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models”, NeurIPS 2022 --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions. Most of my concerns are addressed, except for the speedup breakdown graph which does not show on OpenReview. The experiments are comprehensive and ideas are easy but effective. I will recommend a weak accept for your paper. --- Reply to Comment 1.1.1: Comment: Dear Reviewer UKqS, We sincerely thank you for your positive feedback on our submission. We apologize for the inconvenience caused by the broken link to the speedup breakdown graph in our previous rebuttal. This issue has been resolved, and the graph is now accessible [here](https://github.com/anonymous-m13/slim-icml2025/blob/main/rtx3060_speedup.pdf). We are more than happy to address any additional questions or concerns you may have. Best Regards, Authors
Summary: This paper proposes to use low-rank approximation to reduce the compression error of quantization and pruning on LLMs. SLIM-Quant minimizes the quantization error by selecting the optimal scaling parameter. Low-rank adapters are applied to compensate for the quantization and pruning errors. Quantizing and fine-tuning the adapters further improve SLIM. The results show model performance improvements for 4-bit + 50%-sparsity compressed models. Claims And Evidence: Yes. It claims improvements in model accuracy and speedups on RTX 3060 and A100 GPUs. The results show the improvement on different models and tasks, which verifies the claims. Methods And Evaluation Criteria: Yes. The proposed methods can effectively improve the performance of quantized and pruned models by using low-rank adapters. However, the speedup evaluation should also compare against other quantized and pruned models, not only the FP16 model. Theoretical Claims: Yes. The SLIM-Quant and SLIM-LoRA algorithms were checked. Experimental Designs Or Analyses: I keep up with the literature in this area. Supplementary Material: Yes. All of it. Relation To Broader Scientific Literature: Combining different LLM compression techniques is an efficient way to address the memory problem. Related works are referenced in the paper. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: - The proposed methods achieve good performance on pruned and quantized models. - The additional experiments are comprehensive. Weaknesses: - The speedup for LLMs mainly benefits from quantization, the pruning architecture, and the hardware. So, the speedup comparison is not entirely fair for the proposed method. It should be compared with structurally pruned and quantized models. SLIM-LoRA may reduce inference performance because of the additional low-rank operation. - Although the performance is better than other quantized and pruned methods, it still incurs a considerable loss compared with the dense model. 
For Llama2-13B, the SLIM-LoRA compressed model causes a 5.86% accuracy loss on zero-shot tasks. - The speedups from quantization and pruning should be evaluated separately. A breakdown of the performance improvement would help in better understanding these different compression methods for LLMs. Other Comments Or Suggestions: N/A Questions For Authors: - Why is quantization done first in the paper? How about a pruning-then-quantization model? - Is fine-tuning the key process for enhancing model performance? - Why can the quantized model with low-rank adapters perform better than the dense model in Table 6? Do the quantized models always show better accuracy across different tasks? - It seems that unstructured pruning always performs better than 2:4 pruning in Table 7, but the 2:4-pruned model has faster inference. Can you provide some insights on the accuracy vs. inference speed of these different pruning methods? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's constructive feedback. Below, we provide answers to all the points raised. # Speedup Comparison without LoRA As requested, we have added two tables showing the layer-wise speedups of compressed models, with and without low-rank adapters, on an RTX-3060 GPU. While low-rank adapters slightly reduce speedup, the impact is minimal due to their low overhead. Importantly, SLiM-LoRA enables a flexible trade-off between speedup and model accuracy. \\begin{array}{|c|c|c|c|c|c|} \\hline \\rowcolor[gray]{0.9} Model Size & Batch Size & LoRA Type & Self-Attention & Up-Projection & Down Projection \\\\ \\hline \\rowcolor[gray]{1.0} 13B & 16 & No LoRA & 3.89 & 3.86 & 3.86 \\\\ \\rowcolor[gray]{0.95} 13B & 16 & FP16 & 2.18 & 2.53 & 2.60 \\\\ \\rowcolor[gray]{1.0} 13B & 16 & INT4 & 1.28 & 3.24 & 3.17 \\\\ \\hline \\rowcolor[gray]{0.95} 13B & 32 & No LoRA & 2.78 & 3.14 & 3.50 \\\\ \\rowcolor[gray]{1.0} 13B & 32 & FP16 & 2.23 & 2.68 & 2.91 \\\\ \\rowcolor[gray]{0.95} 13B & 32 & INT4 & 1.43 & 2.96 & 3.20 \\\\ \\hline \\rowcolor[gray]{1.0} 13B & 64 & No LoRA & 1.46 & 1.88 & 1.98 \\\\ \\rowcolor[gray]{0.95} 13B & 64 & FP16 & 1.38 & 1.78 & 1.67 \\\\ \\rowcolor[gray]{1.0} 13B & 64 & INT4 & 1.21 & 1.69 & 1.65 \\\\ \\hline \\end{array} \\begin{array}{|c|c|c|c|c|c|} \\hline \\rowcolor[gray]{0.9} Model Size & Batch Size & LoRA Type & Self-Attention & Up-Projection & Down Projection \\\\ \\hline \\rowcolor[gray]{0.95} 70B & 16 & No LoRA & 3.63 & 4.22 & 4.03 \\\\ \\rowcolor[gray]{1.0} 70B & 16 & FP16 & 2.18 & 2.86 & 2.75 \\\\ \\rowcolor[gray]{0.95} 70B & 16 & INT4 & 3.11 & 3.99 & 3.79 \\\\ \\hline \\rowcolor[gray]{1.0} 70B & 32 & No LoRA & 2.91 & 3.52 & 3.55 \\\\ \\rowcolor[gray]{0.95} 70B & 32 & FP16 & 2.00 & 2.63 & 2.67 \\\\ \\rowcolor[gray]{1.0} 70B & 32 & INT4 & 2.75 & 3.19 & 3.39 \\\\ \\hline \\rowcolor[gray]{0.95} 70B & 64 & No LoRA & 1.89 & 1.98 & 2.12 \\\\ \\rowcolor[gray]{1.0} 70B & 64 & FP16 & 1.38 & 1.70 & 1.86 \\\\ 
\\rowcolor[gray]{0.95} 70B & 64 & INT4 & 1.51 & 1.77 & 1.94 \\\\ \\hline \\end{array} # Accuracy Comparison with Dense Models We believe that **comparing a compressed model with a dense model of the same original parameter count can be misleading**. For gaining a better insight on the accuracy of the models, one needs to compare quality at iso-model size (e.g., effective number of parameters). `Figure-2-Page-8` presents the accuracy of different models vs. their parameter size in GB, allowing for a more fair comparison of the different models. Based on this figure, the **compressed models provide higher accuracies in comparison to dense models of the same parameter size (in GB)**. Additionally, SLiM improves the accuracy of the compressed models by adding negligible additional parameters to them. More discussions about this topic can be found in `Results-Section-Page-7` under the “Comparison of large compressed and small dense models” subsection. # Speedup Breakdown Please refer to our response to [Reviewer UKqS11](ADD-LINK-HERE) regarding an ablation study on the breakdown of the speedups. # Order of Pruning and Quantization Please see [our response to Reviewer SvGK14](https://openreview.net/forum?id=4UfRP8MopP&noteId=5SPU1BHLiL) for a detailed answer regarding this important question. # Effects of Fine-tuning SLiM is designed as a one-shot compression method (similar to SparseGPT) that delivers strong performance without any additional fine-tuning (`Table-1-Page-7`). While an **optional** PEFT step can further enhance accuracy, it is not central to SLiM’s approach. Improvements with optional PEFT are in `Table-2-Page-7` and `Table-7-Page-14`. # Improved Accuracy of Quantized Models With 4-bit quantization-only methods, performance is generally on par with dense models of the same effective size. As shown in `Table-6-Page-13`, some quantized models even slightly outperform their dense counterparts—a trend also observed in LQER [1] and QUIP# [2]. 
In such close cases, perplexity (`Table-9-Page-17`) can provide a more sensitive comparison metric. # Unstructured vs. 2:4 Sparsity Unstructured sparsity offers greater flexibility and often better accuracy, but is difficult to accelerate on modern GPUs [3, 4]. In contrast, 2:4 semi-structured sparsity is supported by recent GPUs (starting with Ampere architecture), enabling real speedups at the cost of some accuracy degradation. [1] Zhang, et al. “LQER: Low-Rank Quantization Error Reconstruction for LLMs”, ICML 2024 [2] Tseng, et al. “QuIP#: Even Better LLM Quantization with Hadamard Incoherence and Lattice Codebooks”, ICML 2024 [3] Xia, et al. “Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity”, VLDB 2023 [4] Zheng, et al. “SparTA: Deep-Learning Model Sparsity via Tensor-with-Sparsity-Attribute”, OSDI 2022
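The unstructured vs. 2:4 distinction discussed above can be illustrated with a small magnitude-pruning sketch (a generic example, not the paper's exact pruning procedure): 2:4 semi-structured sparsity keeps only the two largest-magnitude weights in every contiguous group of four, which is the layout Ampere-class GPU sparse tensor cores can accelerate.

```python
import numpy as np

def prune_2_4(w):
    """Zero out the two smallest-magnitude entries in each group of 4 (magnitude pruning)."""
    w = np.asarray(w, dtype=float)
    groups = w.reshape(-1, 4)
    # Indices of the two smallest |w| per group of four.
    drop = np.argsort(np.abs(groups), axis=1)[:, :2]
    mask = np.ones_like(groups)
    np.put_along_axis(mask, drop, 0.0, axis=1)
    return (groups * mask).reshape(w.shape)

w = np.array([0.9, -0.1, 0.05, -1.2, 0.3, 0.2, -0.25, 0.01])
pruned = prune_2_4(w)  # keeps 0.9, -1.2 in the first group and 0.3, -0.25 in the second
```

Unstructured pruning would instead zero the globally smallest weights with no per-group constraint, which preserves more accuracy but leaves an irregular pattern that is hard to exploit on current GPUs.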
Summary: The paper introduces SLIM, a one-shot post-training compression framework for large language models. It integrates three components: (1) quantization, (2) pruning for hardware‑friendly sparsity, and (3) a low‑rank adapter to compensate for quantization errors. Experimental results show that SLIM improves model accuracy by up to approximately 5.66% and delivers significant GPU inference speedups, making it an effective solution for deploying large models in resource‑constrained environments. **update after rebuttal** I appreciate the authors' clarifications and the additional experimental results. From my initial review through to the current rebuttal phase, my primary concern has consistently been the authors’ decision to minimize quantization error rather than directly targeting output error. In the first-round rebuttal, the authors did not provide a convincing justification for this design choice. In my reply-to-rebuttal comment, I reiterated this concern and requested a more thorough comparison between the two approaches. In their latest response, the authors presented new experimental results aimed at addressing this issue. However, these results indicate that minimizing output error actually yields better accuracy than minimizing weight error. This directly contradicts the rationale for their initially chosen approach and undermines the justification for the proposed methodology. Based on this evidence, I remain unconvinced that the current approach is well-founded. I believe the methodology presented in this paper requires substantial revision and clearer justification of key design choices. While the paper has certain merits, I do not think it is ready for publication in its current form. I insist on a weak reject rating for this paper. Claims And Evidence: The paper provides extensive experimental results that support its claims of improved accuracy and significant inference speedups on GPUs. 
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are generally well-aligned with the goal of compressing large language models. However, certain design choices are less convincing. For instance, the decision to minimize quantization error rather than directly targeting activation or output error is not thoroughly justified, and the cascaded application of quantization followed by off-the-shelf pruning raises concerns about potential error amplification. Without sufficient ablation studies or evidence that this ordering is optimal, these aspects leave room for skepticism regarding the overall effectiveness of the approach. Theoretical Claims: I reviewed the derivations for the probabilistic formulation of quantization error minimization (Equations 3–7) as well as the formulation of the saliency-based low-rank adapter (Equations 8–11). While the derivations are logically coherent, they depend on assumptions that are not rigorously justified. For example, assumptions about the underlying weight distribution for numerical integration and the properties (invertibility and additivity) of the proposed saliency function. Experimental Designs Or Analyses: The experimental designs and analyses are generally sound, with extensive evaluations on standard benchmarks and comparisons against SOTA methods on popular model families (LLaMA‑2, OPT). However, there are some minor concerns. For example, while the experiments validate overall accuracy improvements and speedup claims, the lack of detailed ablation studies—particularly regarding the ordering of quantization and pruning—limits our understanding of error propagation in the cascaded approach. Moreover, the trade-off between minimizing quantization error versus feature or activation error is not thoroughly explored, which could affect the validity of the conclusions drawn from the reported experiments. 
Supplementary Material: I reviewed the appendix, which contains additional experimental results on ablation studies and speedup. Relation To Broader Scientific Literature: The paper’s contributions build directly on established work in model quantization and pruning. It extends prior methods (post-training quantization approaches and pruning techniques like SparseGPT or Wanda) by proposing SLIM‑Quant, a probabilistic formulation for uniform quantization that reframes error minimization as a convex problem via numerical integration. Additionally, it integrates ideas from low‑rank adaptation research (like L$^2$QER and LoRA) by introducing a saliency‑based low‑rank adapter (SLIM‑LoRA) that leverages invertible and additive saliency functions to compensate for compression errors. By combining these approaches into a unified, cascaded pipeline, the work aims to deliver efficient compression and hardware‑friendly inference for LLMs. Essential References Not Discussed: Some recently published LLM quantization methods have been omitted from the discussion, such as: [1] Egiazarian, V., Panferov, A., Kuznedelev, D., Frantar, E., Babenko, A., & Alistarh, D. (2024). Extreme compression of large language models via additive quantization. In ICML 2024. [2] Tseng, A., Chee, J., Sun, Q., Kuleshov, V., & De Sa, C. (2024). QuIP#: Even better LLM quantization with Hadamard incoherence and lattice codebooks. In ICML 2024. Other Strengths And Weaknesses: **Strengths:** - The paper creatively combines probabilistic quantization, off‑the‑shelf pruning, and a saliency‑based low‑rank adapter into a unified one‑shot compression framework. - By targeting hardware‑friendly sparsity (e.g., 2:4 patterns) and demonstrating significant inference speedups on GPUs, the approach is well-suited for real-world deployment of large language models. 
**Weaknesses:** - The decision to minimize quantization error instead of directly targeting activation or output error is not thoroughly justified, potentially limiting overall performance. - The cascaded application of quantization followed by off‑the‑shelf pruning risks compounding errors, with no sufficient justification provided for this specific ordering. - There is a lack of joint optimization between quantization and pruning, which may exacerbate error accumulation. - The theoretical derivations rely on assumptions about weight distributions and saliency properties that are not fully validated, potentially affecting the robustness of the method. Other Comments Or Suggestions: None. Questions For Authors: 1. Can you elaborate on why you chose to minimize quantization error rather than targeting activation or output error? Have you performed any experiments to compare the two approaches? Notably, recent methods such as QUIP# and AQLM focus on minimizing activation error for each layer or group. 2. What is the rationale for applying quantization before pruning? Did you consider or experiment with reversing the order, and if so, what were the outcomes? 3. Is it possible to integrate quantization and pruning into a joint optimization framework rather than treating them as separate cascaded steps? If so, what are the trade-offs, and how do you justify not pursuing this integrated approach? The justification for minimizing quantization error instead of directly targeting activation or output error, as well as the possibility of integrating quantization and pruning into a joint optimization framework, are my two major concerns. I may reconsider my rating based on the authors' responses to these questions. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: # Minimizing Quantization Error

We agree that minimizing the final output error, i.e., $|XW - XW^C|$, is the ideal objective for any compression method. However, directly optimizing this quantity is computationally intractable in general. It is known to be NP-hard and difficult to scale across layers. In SLiM, we address this challenge by decomposing the problem into two tractable subgoals: (1) minimizing weight quantization error in closed form, and (2) applying a lightweight, saliency-guided LoRA module to recover the residual output error after quantization. This design choice allows SLiM to scale to large models, while achieving state-of-the-art results across multiple tasks. Our ablation studies (`Table-1,6`) support the effectiveness of this decomposition: using LoRA on top of weight quantization consistently improves output accuracy, validating that our approach approximates the harder output-error objective effectively in practice.

# Comparison with QUIP# and AQLM

We compare SLiM’s quantization-only (no pruning) zero-shot accuracy on LLaMA-2 models against QUIP# and AQLM. QUIP# does not support sparsity because its Hadamard transform densifies sparse weights, and AQLM quantization of sparse models takes days (as reported on their codebase). We are processing these with AQLM and will share results promptly, given rebuttal constraints. SLiM outperforms the other methods in 2 out of 4 tasks.
## LLaMA-2 7B (4-bit Quantization)

\\begin{array}{|c|c|c|c|c|}
\\hline
\\rowcolor[gray]{0.9} \\textbf{Method} & \\textbf{Arc Challenge} & \\textbf{Arc Easy} & \\textbf{PiQA} & \\textbf{Winogrande} \\\\
\\hline
\\rowcolor[gray]{0.95} QUIP-Sharp & 40.5 & \\textbf{69.1} & \\textbf{78.4} & 67.6 \\\\
\\rowcolor[gray]{1.0} AQLM & 40.3 & 68.9 & 77.7 & 67.3 \\\\
\\rowcolor[gray]{0.95} SLiM-Quant + SLiM-LoRA & \\textbf{43.8} & 68.4 & 78.1 & \\textbf{68.4} \\\\
\\hline
\\rowcolor[gray]{0.9} \\Delta \\text{with best alternative method} & +3.3 & -0.7 & -0.3 & +0.8 \\\\
\\hline
\\end{array}

## LLaMA-2 13B (4-bit Quantization)

\\begin{array}{|c|c|c|c|c|}
\\hline
\\rowcolor[gray]{0.9} \\textbf{Method} & \\textbf{Arc Challenge} & \\textbf{Arc Easy} & \\textbf{PiQA} & \\textbf{Winogrande} \\\\
\\hline
\\rowcolor[gray]{0.95} QUIP-Sharp & 45.5 & \\textbf{73.9} & \\textbf{78.9} & 69.9 \\\\
\\rowcolor[gray]{1.0} AQLM & 43.9 & 72.2 & 78.6 & 70.4 \\\\
\\rowcolor[gray]{0.95} SLiM-Quant + SLiM-LoRA & \\textbf{47.1} & 72.5 & 78.5 & \\textbf{72.5} \\\\
\\hline
\\rowcolor[gray]{0.9} \\Delta \\text{with best alternative method} & +1.6 & -1.4 & -0.4 & +2.1 \\\\
\\hline
\\end{array}

# Order of Pruning and Quantization

We evaluated two SLiM variants: (1) Prune-First, where pruning is applied before quantization, and (2) Quantize-First, where quantization precedes pruning. The table below reports average accuracy across six zero-shot tasks and shows that the compression order has a negligible effect on performance. In both cases, SLiM-LoRA effectively mitigates any induced errors.
\\begin{array}{|c|c|c|c|c|c|c|c|c|c|}
\\hline
\\rowcolor[gray]{0.9} Method & Structure & OPT125M & OPT350M & OPT1.3B & OPT2.7B & OPT6.7B & OPT13B & LLaMA2-7B & LLaMA2-13B \\\\
\\hline
\\rowcolor[gray]{0.95} Quantize First & 2:4 & 34.62 & \\textbf{34.36} & 40.61 & \\textbf{42.73} & 45.99 & \\textbf{46.09} & 51.15 & \\textbf{54.94} \\\\
\\rowcolor[gray]{1.0} Prune First & 2:4 & \\textbf{34.81} & 33.80 & \\textbf{40.66} & 42.10 & \\textbf{46.02} & 45.15 & \\textbf{51.50} & 54.77 \\\\
\\hline
\\rowcolor[gray]{0.95} Quantize First & Unstructured & 35.20 & \\textbf{35.32} & 41.85 & \\textbf{43.48} & 47.08 & \\textbf{47.96} & 54.26 & \\textbf{57.85} \\\\
\\rowcolor[gray]{1.0} Prune First & Unstructured & \\textbf{35.46} & 35.06 & \\textbf{41.49} & 43.16 & \\textbf{47.09} & 46.87 & \\textbf{53.61} & 57.94 \\\\
\\hline
\\end{array}

# Joint Quantization and Pruning

Thank you for the suggestion. With the current formulation of SLiM, joint optimization of pruning and quantization is not feasible. However, because of SLiM’s unique decomposition of tasks, SLiM-LoRA is compatible with various compression methods, and one can readily use it regardless of the compression order.

# Assumptions in Theoretical Derivations

- **SLiM-Quant:** For SLiM-Quant, we avoid assumptions about the weight matrices’ distribution during optimization, instead using an empirically derived histogram of the weights to guide the process.
- **SLiM-LoRA:** For SLiM-LoRA’s saliency function, we assume it is additive and invertible.
  - **Additivity** holds as we define $F(M)=diag(x)M$, where $x$ is the layer’s average input. For matrices $A$ and $B$, $F(A+B)=diag(x)(A+B)=diag(x)A+diag(x)B=F(A)+F(B)$.
  - **Invertibility** is ensured by guaranteeing that $diag(x)$ is non-singular, achieved by enforcing positive values in $x$ (see line 5, Algorithm 2, page 5). This allows the inverse to be computed as $F^{-1}(M) = diag(1/x)M$.

These properties enable SLiM-LoRA to effectively map recovered saliency to low-rank adapters.
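The additivity and invertibility properties stated above can be checked numerically in a few lines. This is a minimal sketch with random stand-ins for the weight matrices and the layer's average input, not the paper's code:

```python
import numpy as np

# Numerical check of the two properties of F(M) = diag(x) @ M,
# with x the layer's average input. x is forced positive so
# diag(x) is non-singular, mirroring the Algorithm 2 constraint.
rng = np.random.default_rng(0)
x = np.abs(rng.normal(size=4)) + 1e-3   # positive entries
A = rng.normal(size=(4, 3))
B = rng.normal(size=(4, 3))

F = lambda M: np.diag(x) @ M            # saliency function
F_inv = lambda M: np.diag(1.0 / x) @ M  # its inverse

assert np.allclose(F(A + B), F(A) + F(B))  # additivity
assert np.allclose(F_inv(F(A)), A)         # invertibility
```

Both assertions hold for any non-singular diagonal scaling, which is exactly why enforcing positive entries in $x$ suffices.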
--- Rebuttal Comment 1.1: Comment: Thanks for the clarifications, but my concerns remain insufficiently addressed. While I appreciate the discussion regarding computational intractability, many existing methods demonstrate that approximate or layer-wise strategies for minimizing output error can be effective. Merely stating that the ideal objective is NP-hard does not justify choosing weight-error minimization by default, particularly when practical approximations exist. Even if SLiM outperforms QUIP# and AQLM in some cases, the argument lacks persuasiveness unless there is a direct comparison between weight-error minimization and output-error minimization under the same framework of SLiM. Furthermore, the response asserts that joint optimization is not feasible within SLIM’s modular design but fails to provide clear experimental or theoretical evidence to confirm that it cannot be implemented effectively or would not yield superior results. These issues continue to raise concerns regarding the overall design and justification of the approach. --- Reply to Comment 1.1.1: Comment: Thank you for reviewing our response and raising additional valuable points. In accordance with your suggestions, we have further extended our method to address your concerns. --- # Output Error Minimization in SLiM-Quant We extended SLiM-Quant by incorporating an output error minimization approach inspired by [AWQ [1]](https://arxiv.org/abs/2306.00978). Similar to AWQ, our revised algorithm applies a scaling strategy to activations, reducing the quantization error of salient weight channels. Specifically, we scale up the weights associated with the most significant channels and correspondingly scale down the related input activations. This approach maintains computational equivalence while effectively lowering the quantization-induced output error. 
Notably, scaling approximately 1% of the channels does not alter the overall quantization parameters but significantly reduces errors in the critical channels. However, our approach diverges from AWQ by introducing a novel saliency metric that jointly considers both activations and weights. We define the saliency of each channel as the product of the normalized average magnitudes of inputs and weights, expressed as ${|x|} \odot {|w|}$, where ${|x|}$ and ${|w|}$ denote the average magnitudes of activations and weights, respectively. Channels with the highest saliency are scaled by a factor $s > 1$, while their corresponding activations are scaled by $\frac{1}{s}$. Although this method introduces modest computational overhead, attributable to on-the-fly adjustments of roughly 1% of activations and the resulting irregular memory access patterns, it yields measurable accuracy improvements. These results underscore a clear trade-off between computational complexity and model performance, highlighting the relative strengths of SLiM$^{O}$ (SLiM with output error minimization) over SLiM$^{W}$ (SLiM with weight error minimization).
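The scaling trick described above can be illustrated in a toy sketch. This uses random data and scales only the single most salient channel (standing in for the top ~1%); it is not the paper's implementation:

```python
import numpy as np

# Saliency per input channel is |x| * |w| (average magnitudes).
# The most salient channel's weights are scaled up by s and its
# activations down by 1/s, so x @ W is unchanged while that
# channel's relative quantization error shrinks.
rng = np.random.default_rng(1)
x = rng.normal(size=(2, 8))   # activations: (batch, in_channels)
W = rng.normal(size=(8, 5))   # weights: (in_channels, out_features)
s = 2.0

saliency = np.abs(x).mean(axis=0) * np.abs(W).mean(axis=1)
top = np.argsort(saliency)[-1:]          # stand-in for the top ~1%

x_scaled, W_scaled = x.copy(), W.copy()
x_scaled[:, top] /= s
W_scaled[top, :] *= s

assert np.allclose(x @ W, x_scaled @ W_scaled)  # computationally equivalent
```

The assertion makes the "maintains computational equivalence" claim concrete: scaling a row of $W$ by $s$ and the matching activation column by $1/s$ leaves the product unchanged.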
## Average Accuracy Over 6 Zero-shot Tasks

### 2:4 Sparsity

\\begin{array}{|c|c|c|}
\\hline
\\rowcolor[gray]{0.9} \\textbf{Method} & \\textbf{LLaMA-2-7B} & \\textbf{LLaMA-2-13B} \\\\
\\hline
\\rowcolor[gray]{0.95} SLiM^{{W}} & 51.15 & 54.94 \\\\
\\rowcolor[gray]{1.0} SLiM^{{O}} & 51.22 & 55.05 \\\\
\\hline
\\end{array}

### Unstructured Sparsity

\\begin{array}{|c|c|c|}
\\hline
\\rowcolor[gray]{0.9} \\textbf{Method} & \\textbf{LLaMA-2-7B} & \\textbf{LLaMA-2-13B} \\\\
\\hline
\\rowcolor[gray]{0.95} SLiM^{{W}} & 54.26 & 57.85 \\\\
\\rowcolor[gray]{1.0} SLiM^{{O}} & 54.46 & 57.97 \\\\
\\hline
\\end{array}

## Perplexity on WikiText-2

### 2:4 Sparsity

\\begin{array}{|c|c|c|}
\\hline
\\rowcolor[gray]{0.9} \\textbf{Method} & \\textbf{LLaMA-2-7B} & \\textbf{LLaMA-2-13B} \\\\
\\hline
\\rowcolor[gray]{0.95} SLiM^{{W}} & 7.56 & 6.50 \\\\
\\rowcolor[gray]{1.0} SLiM^{{O}} & 7.35 & 6.38 \\\\
\\hline
\\end{array}

### Unstructured Sparsity

\\begin{array}{|c|c|c|}
\\hline
\\rowcolor[gray]{0.9} \\textbf{Method} & \\textbf{LLaMA-2-7B} & \\textbf{LLaMA-2-13B} \\\\
\\hline
\\rowcolor[gray]{0.95} SLiM^{{W}} & 6.16 & 5.36 \\\\
\\rowcolor[gray]{1.0} SLiM^{{O}} & 6.06 & 5.28 \\\\
\\hline
\\end{array}

---

# Joint Pruning and Quantization

The current modular design of SLiM does not support direct joint pruning and quantization. However, as noted in our rebuttal, our proposed saliency-based LoRA method (SLiM-LoRA) is compatible with existing joint pruning and quantization approaches. To demonstrate this compatibility, we integrated SLiM-LoRA with JSQ [2], a representative joint pruning and quantization method.
The table below reports the average accuracy across six zero-shot tasks for different models: \\begin{array}{|c|c|c|c|c|} \\hline \\rowcolor[gray]{0.9} \\textbf{Method} & \\textbf{LoRA} & \\textbf{Structure} & \\textbf{LLaMA-2-7B} & \\textbf{LLaMA-2-13B} \\\\ \\hline \\rowcolor[gray]{0.95} \\text{JSQ (4-bit)} & \\text{N/A} & 2{:}4 & 45.34 & 49.45 \\\\ \\rowcolor[gray]{1.0} \\text{JSQ (4-bit)} & \\text{SLiM-LoRA} & 2{:}4 & 46.14 & 50.19 \\\\ \\rowcolor[gray]{0.95} \\text{JSQ (4-bit)} & \\text{N/A} & \\text{Unstructured} & 52.08 & 56.20 \\\\ \\rowcolor[gray]{1.0} \\text{JSQ (4-bit)} & \\text{SLiM-LoRA} & \\text{Unstructured} & 52.37 & 56.72 \\\\ \\hline \\end{array} These results demonstrate that applying SLiM-LoRA enhances the accuracy of models utilizing joint pruning and quantization. Please note that JSQ, even when augmented with SLiM-LoRA, does not outperform SLiM-Quant combined with SLiM-LoRA. This discrepancy arises because JSQ was originally designed for 8-bit weight quantization, whereas SLiM-Quant is specifically optimized for 4-bit weight quantization. To facilitate a fair comparison, we extended JSQ to support 4-bit weight quantization (see `Section-4-Page-6`, 'Baselines' for detailed explanations). [1] Lin, et al. “AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration”, MLSys 2024 [2] Yu, et al. “JSQ: Compressing Large Language Models by Joint Sparsification and Quantization”, ICML 2024
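The residual-compensation idea running through this thread — recovering part of the compression error with a low-rank adapter — can be illustrated generically. This sketch uses a plain truncated SVD of the error matrix; SLiM-LoRA's saliency transformation is not reproduced here:

```python
import numpy as np

# Approximate the compression error W - W_c with a rank-r
# factorization A @ B, so W_c + A @ B recovers part of the lost
# signal. Plain SVD is used purely for illustration; the toy
# "quantization" below just snaps weights to a 0.5 grid.
rng = np.random.default_rng(2)
W = rng.normal(size=(16, 16))
W_c = np.round(W * 2) / 2     # toy quantized weights
r = 4

U, S, Vt = np.linalg.svd(W - W_c)
A = U[:, :r] * S[:r]          # (16, r) adapter factor
B = Vt[:r, :]                 # (r, 16) adapter factor

err_before = np.linalg.norm(W - W_c)
err_after = np.linalg.norm(W - (W_c + A @ B))
assert err_after <= err_before  # rank-r term can only reduce the error
```

Because the truncated SVD is the best rank-r approximation of the error in Frobenius norm, adding the adapter never increases the weight error; methods like SLiM-LoRA refine which directions of the error are worth spending the rank budget on.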
Banyan: Improved Representation Learning with Explicit Structure
Accept (poster)
Summary: This paper introduces Banyan: a new recursive graph neural network for learning text representations in low-resource languages. This model extends previous work by building nested trees over sequences that share the same tokens. In Banyan the same tokens will have the same tree node, even if they come from different sequences. For scalability reasons, the trees are constructed from a batch of samples rather than from an entire dataset. Embeddings are learned from a simplified message-passing algorithm that traverses the trees both in bottom-up and top-down directions. Having nested trees provides multiple advantages, notably the reduction of duplicated nodes and multiple context representations within the same node. These advantages translate to strong semantic representations in both English (when compared to RoBERTa & GLOVE) and lower-resourced languages (when compared to XLM-R (fine-tuned), Llama 3.1 8B, Mistral Nemo 12B, MiniL12, and Paraphrase XLM-R). ## update after rebuttal Claims And Evidence: yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation benchmarks make sense for the application at hand. Theoretical Claims: I did not check the correctness of any proofs or theoretical claims as I did not find any. Experimental Designs Or Analyses: Experimental designs are sound. I checked their sentence and word level evaluation as well as their ablation study, and retrieval and classification tasks on English. I also checked their multi-lingual results. Supplementary Material: I reviewed appendix A: the k and u balance ablation study Relation To Broader Scientific Literature: The contribution of this paper is related to the STRAE model (Opper et al., 2023) published at EMNLP in 2023. This previous work is a similar, tree-based, sentence-level embedding method, that models compositional semantics with minimal data and model size requirements. 
Banyan differentiates itself by introducing entangled trees, which provide better performance and resource efficiency. Essential References Not Discussed: No essential references are missing to the best of my knowledge. Other Strengths And Weaknesses: ### Strengths This paper introduces a novel recursive model for learning textual representations, along with its learning mechanism. The proposed architecture is novel and seems promising, as it yields good results when compared to other more classical methods. In addition, the proposed method is very efficient: it requires very little training and has only 14 non-embedding parameters. ### Weaknesses The task used to evaluate models is not immediately clear from the abstract or the introduction. While Semantic Textual Similarity (STS) is mentioned in the introduction, the objective is not well explained. The paper could gain clarity by briefly stating the goal: producing cosine similarities that rank sentence pairs in a way that matches human judgements. Table 3 shows results for retrieval and classification tasks from the BEIR and GLUE benchmarks, but very little information is given regarding the experimental setup and the result analysis. The paper would be stronger with detailed experimental design choices and result analysis on these benchmarks. Other Comments Or Suggestions: In Table 3, the last two columns should also bold the best numbers (Banyan @ 77.2 accuracy for SST-2, and GloVe & RoBERTa @ 81 for MRPC) Questions For Authors: No additional questions Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your review and feedback regarding the paper! The overall logic for our experiments is as follows: We care about whether we can create a resource-efficient model by applying an inductive bias. This can be very useful for low-resource languages. However, such languages do not have many high-quality labelled datasets. The SemEval STS benchmark is the only wide-ranging benchmark that we know of. What we want to do is take a high-resource language (i.e. English) with a lot of labelled datasets and use that to establish whether we can learn high-quality embeddings. STS provides one view of that. However, we should also make certain that Banyan succeeds at other tasks you might care about. The retrieval and classification tasks are there to demonstrate that the embeddings learned by Banyan capture multifaceted aspects of semantics. More concretely:

Quora: This is about matching questions to answers. Crucial for applications like RAG, and lets us see whether our embeddings capture the response relation.

Arguana: This requires matching arguments to counter-arguments. It lets us see whether our semantic space captures the notion of dialectical opposition.

SST: Sentiment classification — does our representation space capture semantic polarity?

MRPC: Paraphrase detection — does our representation space capture semantic equivalence?

Moreover, these tasks utilise measures other than correlation, which provides a broader basis for measuring the success of embeddings. By demonstrating broad-spectrum capabilities in English, and then showing similar trends under a more limited evaluation in the low-resource languages, we aim to show that Banyan a) learns effective representations, and b) can therefore provide a solution in cases where other methods that require scale are inadequate. You are right that this could be better clarified and we will make sure to use the extra page if accepted to do so!
Let us know if you have any further questions, and we look forward to engaging with you during the discussion period.
Summary: This paper studies the problem of learning semantic representations for language in low-resource settings. While word embeddings can be learned with little data, they are non-contextual; on the other hand, transformers can produce contextual embeddings but are data-hungry. In this work, the authors build on an existing architecture, that of Self-StrAE, and propose Banyan. The two main changes to Self-StrAE are that (1) in the downward pass, embeddings for spans with the same sequence of tokens are averaged following an entangled graph structure, and (2) neural models for embedding combination and decombination are replaced with simple diagonalized functions. The resulting model is very efficient in that training is very quick and inference can be done on CPU. The authors evaluate the model on various word-level and sentence-level tasks, first on English, and then on various low-resource languages, comparing it to word embeddings, Self-StrAE, and various transformer baselines trained on much more data. Banyan outperforms all baselines in most settings. Finally, they perform ablations and show that each proposed architectural change is beneficial. Claims And Evidence: - The main claim is that Banyan outperforms various baselines (word embeddings, Self-StrAE, and transformers) in representation learning for low-resource settings. This claim is well-supported by evidence, and the authors perform thorough evaluation on a very wide range of tasks across many languages. - The other main claim in the paper is that the proposed modifications to Self-StrAE are beneficial (entangled graphs, diagonalized functions, replacing contrastive loss with cross entropy). This claim is also well-supported by thorough ablations. - The main limitation of the experiments is that the non-neural baselines are relatively weak. In particular, there are many well-known sentence embedding approaches that are not cited or compared to (please see "Essential references not discussed"). 
The lack of such baselines undermines the claim that Banyan represents a breakthrough for producing representations in low-resource settings. Methods And Evaluation Criteria: - The method makes sense and the modifications to Self-StrAE are well-motivated. The efficiency of the method also makes it well-suited for low-resource settings. - The benchmarks chosen are also reasonable in that they study (1) how well the representations match human judgments of semantic similarity, (2) how well the representations perform in retrieval, and (3) how well the representations perform for classification. Banyan performs well for (1) and (2) and is slightly worse than baselines for (3). Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design (English experiments, multilingual evaluation, ablations) all seem sound. The main limitation of the results is the weak non-neural baseline (in particular, the lack of sentence embedding methods), as discussed below. The paper also lacks any non-neural baselines in the multilingual evaluations (Table 4), omitting the word embedding baseline present in the English experiments. Supplementary Material: Yes, I reviewed the appendix. Relation To Broader Scientific Literature: The paper proposes a new architecture for structured representation learning which is novel, and it does a good job of positioning and motivating the proposed changes with respect to past work (Self-StrAE). However, it is somewhat unclear how this method relates to simpler existing approaches for sentence embeddings (please see the next section). Essential References Not Discussed: The main set of missing references (and baselines) is the large body of past work on sentence embeddings. While the method proposed in this paper is novel with respect to these past works in that it induces structure, it still competes with these sentence embedding methods as a way to produce sentence representations in low-resource settings. 
The following are a few methods that are most related to their setting: - [Arora, Liang, Ma (2017)](https://oar.princeton.edu/bitstream/88435/pr1rk2k/1/BaselineSentenceEmbedding.pdf), and its extension [Ethayarajh (2018)](https://aclanthology.org/W18-3012.pdf): at a high level, these methods take a weighted average of the word embeddings and apply an SVD - [Pagliardini, Gupta, Jaggi (2017)](https://arxiv.org/pdf/1703.02507): this method (sent2vec) extends the word2vec objective to sentences - [Ruckle et al. (2018)](https://arxiv.org/pdf/1803.01400): they show that averaging embeddings with a power mean (e.g. max pooling) and doing concatenation performs better than just taking the average. In particular, the STS numbers reported in these papers are much higher than those of the GloVe baseline presented in this work. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: - Overall, the paper is very well-written, but section 5.4 is slightly harder to follow than the rest of the paper (but it is still clear enough). - In my first pass through the paper, I didn't realize that Banyan used CE loss instead of contrastive loss. Maybe that is worth mentioning at the end of Section 4. - In Section 4, I think it's also worth briefly describing how K and U relate to word embedding dimension, and what it means to have independent channels. Typos: - 163: ask -> asks - 383: lightweigh -> lightweight - 426: elegant. -> elegant, Questions For Authors: - I am a bit confused that switching from C to CE loss on standard trees is better (Table 5), given that the Strae paper found the opposite conclusion. My guess is that C > CE for supervised StrAE, but CE > C for Self-StrAE, which makes intuitive sense because supervising intermediate nodes directly can be brittle if the trees are self-induced and slightly wrong. Is this the correct intuition?
- I am curious about the induced trees, whether they seem qualitatively reasonable, and whether they match human-designed trees (e.g. on the Penn Treebank). In particular, if the method does in fact recover reasonable trees, that would set it apart from previous sentence embedding methods. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your extensive review and extremely helpful feedback! Other sentence embedding baselines: We appreciate you providing the missing references and will be sure to include them in the paper. Our key focus is to assess whether we can create an efficient method for learning representations for use with low resource languages. [1] Baseline Sentence Embeddings and [2] Unsupervised Random Walk both initialise their embeddings from pre-trained GloVe vectors trained on 840 billion tokens of text. This does naturally lead to better performance, but is only possible for super high resource languages like English, and not the low resource settings that we are primarily concerned with. 
Power mean combination [4] also uses pre-trained embeddings. Further, the difference in performance [4] reports on SST is negligible. When we attempted to replicate their method we also found no meaningful change. Sent2Vec [3] reports several pretraining settings, the smallest of which is 900 million tokens with 700D vectors, which again is beyond low-resource scale. However, the method is strong and certainly seems worth comparing to. We used the official implementation and followed their hyperparameter recommendations to train our own version. We used our English Wikipedia subsample and 256D vectors for parity. Results are as follows:

| Model | STS-12 | STS-13 | STS-14 | STS-15 | STS-16 | STS-B | SICK | SemRel | Score |
|----------|--------|--------|--------|--------|--------|-------|-------|--------|-------|
| Banyan | 51.2 +- 0.007 | 69.1 +- 0.002 | 63.3 +- 0.004 | 73.2 +- 0.002 | 66.6 +- 0.002 | 61.5 +- 0.002 | 55.5 +- 0.003 | 61.6 +- 0.002 | 62.7 +- 0.001 |
| Sent2Vec | 38.14 +- 0.29 | 51.37 +- 0.48 | 48.64 +- 0.09 | 67.28 +- 0.023 | 56.26 +- 0.06 | 53.39 +- 0.11 | 59.67 +- 0.02 | 51.47 +- 0.03 | 53.28 +- 0.11 |

| Model | Q N@1 | Q N@10 | Q R@1 | Q R@10 | A N@1 | A N@10 | A R@1 | A R@10 | SST | MRPC |
|----------|-------|--------|-------|--------|-------|--------|-------|--------|-------|------|
| Banyan | 57.83 +- 0.04 | 65.78 +- 0.05 | 50.19 +- 0.08 | 75.80 +- 0.18 | 13.21 +- 0.25 | 29.28 +- 0.11 | 27.41 +- 0.68 | 49.60 +- 0.52 | 79.51 +- 0.16 | 77.2 +- 0.27 |
| Sent2Vec | 36.12 +- 0.21 | 43.26 +- 0.15 | 31.33 +- 0.21 | 52.38 +- 0.05 | 9.6 +- 0.31 | 23.24 +- 0.15 | 9.6 +- 0.31 | 39.73 +- 0.89 | 76.53 +- 0.98 | 81 +- 0.0 |

| Model | Simlex | WordSim S | WordSim R | Score |
|----------|--------|-----------|-----------|--------|
| Banyan | 16.57 +- 0.02 | 63.25 +- 0.03 | 69 +- 0.01 | 49.61 +- 0.02 |
| Sent2Vec | 28.88 +- 0.42 | 68.32 +- 1.26 | 54.49 +- 1.51 | 50.56 +- 0.79 |

Sent2Vec is stronger than our original WE baselines, though Banyan generally retains an edge.

Multilingual Word Embedding Baseline: We only compare to large pretrained models here because we believe our initial experiments already demonstrate Banyan's utility compared to other methods you could easily train from scratch. Nevertheless, we have trained sent2vec for a few of the languages and find similar trends to the results on English:

Afrikaans: Banyan: 78.68 +- 0.30 Sent2Vec: 73.36 +- 0.55
Telugu: Banyan: 71.13 +- 0.91 Sent2Vec: 68.58 +- 0.58
Spanish: Banyan: 60.95 +- 0.76 Sent2Vec: 55.15 +- 0.54

Sent2Vec requires a sentence-tokenised corpus as input, and finding and then running tokenisers for all the languages is proving to be quite time consuming and in some cases challenging. We are however happy to work on filling out a sent2vec baseline for all languages if you strongly feel this would improve the paper.

Suggestions: Thank you for pointing these out, all are good improvements and we will edit accordingly!

C > CE: It depends on what you mean by brittle. Using the contrastive loss does lead to consistent performance. However, it can lead to certain unfavourable behaviour whereby tokens like ‘the’ are excessively pushed away from all other embeddings, which introduces oddities in the structure. However, C is still superior to CE because it adds a whitening effect, which is vital for tasks like STS. To retain whitening while using CE we had to switch to the diagonal functions. These are simple and therefore need easily separable embeddings to still do well on reconstruction.

The Trees: To an extent they do seem reasonable, although our merge algorithm is context free, which limits the types of structure it can produce. It is maybe best to think of them as akin to running BPE all the way until the whole sequence is compressed. Qualitatively we do find that this leads to some reasonable patterns, and consistent behaviours such as segmenting phrases can be observed. We would be happy to include some examples in the appendix in the final version!
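The context-free, BPE-like merging discussed above can be sketched in a few lines. This is a toy illustration only: cosine similarity over adjacent pairs and simple averaging stand in for Banyan's learned similarity and composition:

```python
import numpy as np

# Toy sketch of context-free greedy bottom-up merging: repeatedly
# fuse the most similar adjacent pair of embeddings until a single
# root remains. Averaging is a placeholder for the learned
# composition function.
def greedy_merge(embeddings):
    nodes = [np.asarray(e, dtype=float) for e in embeddings]
    while len(nodes) > 1:
        sims = [
            nodes[i] @ nodes[i + 1]
            / (np.linalg.norm(nodes[i]) * np.linalg.norm(nodes[i + 1]))
            for i in range(len(nodes) - 1)
        ]
        i = int(np.argmax(sims))
        nodes[i:i + 2] = [(nodes[i] + nodes[i + 1]) / 2]
    return nodes[0]

root = greedy_merge(np.random.default_rng(3).normal(size=(5, 8)))
assert root.shape == (8,)
```

Restricting merges to adjacent pairs is what makes the result a tree over the sequence, and since the similarity scores ignore surrounding context, the procedure is context free in the sense described.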
Thank you for your constructive feedback and time taken with the paper, we look forward to engaging with you during the rebuttal period! If you feel like our rebuttal has addressed your concerns please consider raising your score.

--- Rebuttal Comment 1.1: Comment: Thanks for the detailed reply! I find the additional experiment convincing and have raised my score. If you have time, I would be curious about the [Ruckle et al. (2018)](https://arxiv.org/pdf/1803.01400) baseline as well because it is quite simple. Instead of taking a simple average over the word embeddings, it takes the power-mean for various values of p (where p=infty corresponds to max pooling), and then concatenates these embeddings together. This simple method also has strong performance for semantic similarity tasks and should be doable in low-resource settings since it only requires word embeddings.

--- Reply to Comment 1.1.1: Comment: We ran some evaluations with the best configuration from Ruckle et al. [-inf, 1, inf] and found some mixed results:

| Model | STS-12 | STS-13 | STS-14 | STS-15 | STS-16 | STS-B | SICK | SemRel | Score |
|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|---------------|----------------|
| GloVe | 39.00 +- 0.57 | 41.61 +- 0.19 | 39.31 +- 0.18 | 51.06 +- 0.35 | 45.14 +- 0.14 | 48.40 +- 0.07 | 52.80 +- 0.04 | 42.37 +- 0.13 | 44.96 +- 0.1 |
| + Power Mean | 35.35 +- 0.79 | 42.44 +- 0.83 | 39.72 +- 0.42 | 47.93 +- 1.09 | 43.08 +- 0.76 | 50.73 +- 0.03 | 51.17 +- 0.06 | 38.81 +- 0.25 | 43.65 +- 0.65 |

It seems helpful in some cases but quite detrimental in others, overall leading to a slightly worse score... We will keep experimenting to see if it can be improved, but overall we don't expect the picture to change much based on the results reported by Ruckle et al in Table 2.
The performance increase seems very slight - and insufficient to bridge the gap even to sent2vec's performance level. Thanks again for your helpful feedback and constructive review, it provided some really useful context to inform our work!
Summary: The paper proposes BANYAN, a graph-based autoencoder that learns sentence representations by explicitly encoding hierarchical structures. It extends a prior structured model (SELF-STRAE) in two major ways: 1) Entangled Trees: Instead of building a separate tree per sentence, the model merges identical token spans across all sentences (in a batch) into shared nodes. Each node can thus gather training signals from multiple contexts, rather than duplicating the same span across different sentences. This “entangling” reduces memory usage (fewer total nodes) and helps prevent conflicting training signals. 2) Diagonalized Message Passing: The composition and decomposition functions that build and break down embeddings along the tree are replaced with tiny diagonal gating operations. Rather than a full matrix multiply, each dimension is scaled by a learned scalar in [0, 1], allowing the model to control how much information from each child flows upward or downward. This sharply cuts parameter count (just 14 scalars beyond the embeddings) and enforces a rigid “compression order” over the tree. Training uses a cross-entropy reconstruction of the original sentence, ensuring that the root embedding and each intermediate node carry enough information to decode their children. The structure itself is induced via a greedy merging algorithm that repeatedly combines the most similar pairs of embeddings, forming a tree bottom-up. By reusing nodes across sentences, BANYAN effectively averages how each repeated phrase is used in different contexts. BANYAN is tested mainly on semantic textual similarity (STS) tasks at word and sentence level, plus a few classification and retrieval tasks. Key findings are that, despite having very few non-embedding parameters, Banyan matches or beats larger transformer models on many STS benchmarks. It excels especially in low-resource languages, where big multilingual transformers often struggle without abundant data. 
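The diagonalized message passing summarized above — per-dimension scalars in [0, 1] instead of a full matrix multiply — can be illustrated with a small sketch. The exact Banyan parameterization is not reproduced here; per-child gates passed through a sigmoid are one plausible instantiation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical sketch of diagonal gating: each child's contribution
# is scaled per dimension by a learned gate in (0, 1). Everything is
# elementwise, so no weight matrix (and almost no parameters) is
# needed to compose two children into a parent.
def compose(left, right, gate_left_logit, gate_right_logit):
    return sigmoid(gate_left_logit) * left + sigmoid(gate_right_logit) * right

d = 6
# Zero logits give gates of 0.5 in every dimension.
parent = compose(np.ones(d), np.ones(d), np.zeros(d), np.zeros(d))
assert np.allclose(parent, 1.0)  # 0.5 * 1 + 0.5 * 1
```

Because the gates multiply dimension-wise, repeated composition up the tree decays each child's influence smoothly, which is the "compression order" behavior the summary describes.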
Claims And Evidence: The paper’s core claim is that explicit structure and minimal gating can yield strong, data-efficient embeddings. Results on various English and multilingual STS sets support this, showing BANYAN is both much smaller than conventional models and still highly competitive. Another claim is that merging identical spans across sentences removes duplication and leverages repeated phrases more effectively. This is demonstrated by the model’s consistent gains over the previous approach that used one tree per sentence, as demonstrated in ablations. Methods And Evaluation Criteria: Banyan is an unsupervised method. The primary measure of success is whether the learned embeddings rank sentence pairs in alignment with human-rated similarity. It also checks retrieval metrics (e.g., NDCG, Recall) and downstream classification performance with a frozen encoder to assess practical utility. Overall, the evaluation makes sense. Theoretical Claims: The paper doesn’t present formal proofs but makes some conceptual claims, for example, that diagonal gating enforces a strict “compression order” over the tree and that averaging entangled nodes (batch-wise) is an unbiased estimator of their global context. These claims appear logically consistent with standard ideas in gating (e.g. decaying influence across multiple composition steps) and stochastic training. No glaring theoretical issues were found. Experimental Designs Or Analyses: The experiments compare BANYAN to relevant baselines (both structured and transformer-based) on unsupervised semantic similarity tasks, plus a few downstream evaluations. The training setup (e.g., matching embedding sizes across models, consistent optimizer settings) is well-justified. The ablation study (entangled vs. standard trees, diagonal vs. full matrices, contrastive vs. cross-entropy) clarifies each modeling choice’s impact. No obvious flaws in methodology were detected. Supplementary Material: No. 
Relation To Broader Scientific Literature: Banyan follows a line of structured representation learning models but innovates by entangling repeated spans across sentences and using diagonal gating in the composition functions. This ties into prior work on compositionality, unsupervised parsing, and efficient RNN gating, yet it is unique in merging identical spans globally for more efficient, context-rich representations. The discussion references all key works I can think of. Essential References Not Discussed: None I can think of. Other Strengths And Weaknesses: Strengths: * Original: Entangled composition across multiple sentences is novel and addresses false negatives or duplicate merges. * Efficiency: The model relies on only ~14 non-embedding parameters yet can rival large transformers, especially in low-resource contexts. * Clarity: The paper is logically organized and methodologically transparent (ablation results, multilingual experiments). It is really well written. Weaknesses: There are no clear weaknesses. The paper supports all its claims quite well. One could point out that the model goes against the obvious trend of training larger models on more data, and will likely not play a big role in the future. But I wouldn't consider that a real weakness. Other Comments Or Suggestions: line 163: ask -> asks Questions For Authors: I don't have questions that would change my evaluation. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your feedback, and positive appraisal of our work! We will make sure to fix the typo - thanks for pointing that out! Regarding the trend of bigger models and more data, this is definitely the way things are moving, but it also leaves a lot of languages behind and limits who can participate in research. Efficient solutions can help bridge that gap. --- Rebuttal Comment 1.1: Comment: I completely agree. This is good work and should find its way into the conference.
Understanding the Logic of Direct Preference Alignment through Logic
Accept (poster)
Summary: This paper proposes a symbolic method to interpret direct preference alignment (DPA) loss functions. Given a DPA loss, the proposed method translates it into a preference structure consisting of three formulae, which can be further used to construct a corresponding semantic loss. The paper further shows how existing DPA variants can be converted to semantic loss forms and explores the relation between these different losses. Finally, the paper proposes a simple study about how the formulation can be used to search for new DPA loss functions. ## update after rebuttal I have read the authors' rebuttal and other reviews. I would like to raise my score to weak accept. The paper can benefit from the promised revisions. Claims And Evidence: **Claim: we show how this formal view of preference learning sheds new light on both the size and structure of the DPA loss landscape, making it possible to rigorously characterize the relationships between recent loss proposals.** This claim is supported by the paper's theoretical analysis of existing DPA losses and their relations. **Claim: (the finding also makes it possible) to systematically explore the landscape and derive new loss functions from first principles.** This claim is not fully supported. Although the paper proposes a limited study in table 5, the found loss term does not exhibit a significant advantage over existing ones. Furthermore, it is unclear to me how the proposed formulation can help explain the performance of different DPA variants, although from figure 4 there are indeed relations and differences in the semantics of these variants. Furthermore, the analysis of DPOP in Section F suggests that ad-hoc treatment is needed for DPA losses with "non-standard" forms. This undermines the generalisability of the proposed method as an explanation and exploration tool. Methods And Evaluation Criteria: **Formalising DPA loss terms as semantic loss** The method is interesting. 
However, it seems that the method is limited to the "main" component of the loss, disregarding the regularisation terms. Regularisation terms usually play important roles in those preference optimisation losses. So while I believe this method is able to interpret part of the semantics behind the loss, it may also ignore important aspects. **Algorithm 1: translation of loss to logic** The algorithm uses the rules in table 6, which cover P1*P2, (1-P1) and P1+P2. I wonder how general these rules are. For example, how can $\sqrt{P1 \times P2}$ be translated? Theoretical Claims: I did not check all the theoretical results, but the main results in proposition 2, lemma 1, theorem 2, and Table 4 seem correct. Experimental Designs Or Analyses: There are not many experiments. Table 5 presents an empirical study on how a new loss, i.e., $L_{cCPO}$, performs compared with existing loss terms. The performance of the found loss is mediocre, and the experiment overall does not convince me that this method is able to help with loss design. Supplementary Material: I briefly checked sections D, E, and F. Other sections in the appendix were not reviewed. Relation To Broader Scientific Literature: This paper is a symbolic interpretation of preference optimisation losses for value alignment. The topic is important. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: - This paper proposes a formulation of DPA loss in terms of symbolic representation-based semantic loss. From many existing loss terms, the proposed method draws interesting insights about their semantics. Weaknesses: Please check the review above; I repeat some important points below. - The paper lacks an empirical or theoretical study on the applicable extent of the proposed method. The translation is based on a set of fixed rules, making it unclear how general these rules are. - Special treatments are required for some DPA loss variants (e.g., DPOP). 
- The paper does not provide many insights into the differing performance of loss variants, nor does it provide insights on how to design new DPA losses. I wonder how this method can practically benefit preference optimisation studies. Other Comments Or Suggestions: Please refer to my points above. Questions For Authors: Please refer to my points above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback. > The translation is based on a set of fixed rules, making it unclear how general these rules are. See the comment below about WMC. > The algorithm uses rules in table 6, which covers P1*P2, (1-P1) and P1+P2. I wondered how general this is. For example, how can sqrt(P1×P2) be translated? Given that our analysis is based on WMC, the losses being decompiled must be expressible as the polynomial class defined by WMC; by construction, we restrict this to the *disjoint multilinear polynomials* defined on line 348. Except for DPOP (see discussion below), this naturally captures all known preference losses, including those in Table 2, which we emphasize is a fairly comprehensive set of DPA losses (we updated our appendix to include additional DPO variants that fit this analysis), as well as common loss functions such as cross-entropy and unlikelihood. The square root clearly does not fit this equation class, so it is not covered by our analysis. Is there a particular loss involving a square root that you have in mind, or some other salient loss that seems out of scope? > The method is limited to the "main" component of the loss, disregarding the regularisation terms. Regularisation terms usually play important roles in those preference optimisation losses. So while I believe this method is able to interpret part of the semantics behind the loss, it may also ignore important aspects. This is an important point, which we will say more about in the updated draft. We note, however, that the decision to avoid regularization terms in our analysis follows other formal studies such as that by Tang et al. (2024). We also found cross-entropy terms, even when removed, to have limited impact on empirical performance, which is a finding consistent with [1]. [1] Hanyang Zhao et al. RainbowPO: A Unified Framework for Combining Improvements in Preference Optimization > Special treatments are required for some DPA loss variants (e.g., DPOP). 
This special treatment is due to the compositional constraint from Assumption 1 (inspired by programming semantics), which stipulates that every model prediction in a loss should be directly accounted for in the semantics. *For clarity*: the DPOP loss has terms such as $p_{\theta}(y_{w} \mid x)^{k}$ and $p_{ref}(y_{w} \mid x)^{k}$ with exponents $k$ that make the equation not multilinear when $k > 1$ (without loss of generality we only considered cases where $k=1,2$). Our decision to make them multilinear when $k > 1$, by creating new variables $p_{\theta2}(y_{w} \mid x)$ and $p_{ref2}(y_{w} \mid x)$, is based on a common polynomial transformation for making polynomials multilinear, which in the end maintains compositionality. Importantly, as we also detail in Appendix F, the value of $k$ also varies from instance to instance, so both $p_{\theta2}(y_{w} \mid x)$ and $p_{ref2}(y_{w} \mid x)$ do not have a fixed value (i.e., they can either take the value of the other variables or be equal to 1 when $k=1$), which we think justifies treating them as semantically distinct values. We will discuss this more in the draft, including making this more clear when we first reference DPOP around line 140 (second column). We acknowledge that such exponents could be treated differently, e.g., as parameters that are part of the probability computation for each variable, similar to how length normalization is computed, which would completely sidestep these issues involving multilinearity. > Furthermore, it is unclear to me how the proposed formulation can help explain the performance of different DPA variants, although from figure 4 there are indeed relations and differences in the semantics of these variants. 
As we discussed in our response to JCXg, Proposition 3 establishes that semantic entailment induces certain monotonicity properties w.r.t. the behavior of the compiled losses, and provides us with a notion of the relative constrainedness of losses that is grounded in loss behavior. As discussed in Sec. E.1, the relative constrainedness of a loss is a property that we think has an important impact on its empirical performance, which is explained by our framework and observable when looking at training dynamics as shown in Figure 8. We moved this analysis into the appendix due to space issues. If accepted, we intend to include some of this empirical analysis in the main paper (including some additional experiments that scale the reported ones).
Summary: This paper introduces a novel approach to describing direct preference alignment (DPA) algorithms in terms of propositional logic. By generalizing the notion of semantic loss, the authors attempt to provide a formal framework to characterize differences between DPA variants. Subsequently, the authors leverage the introduced framework to discover improved loss formulations. Experiments on a small (0.5B params) LLM show that the proposed systematic comparison can produce better loss functions for DPA. Claims And Evidence: The central claim of the paper is that the proposed framework allows for novel insights into relationships between existing DPA formulations. Additionally, they propose to leverage that framework to create new loss formulations in a structured manner. Given that this is more of a theoretical paper, the authors provide limited experiments. However, they verify their first claim by deriving a loss landscape (or lattice) that provides valuable insights into the interplay between different losses. Given that landscape, they also derive new loss formulations that outperform existing formulations. Methods And Evaluation Criteria: - the theoretical results are strong and reasonably evaluated/demonstrated - The subsequent empirical experiments are decent but lack some depth since only one (and very small) LLM was considered Theoretical Claims: The theoretical claims and proofs appear to be sound, but my confidence in that part of the review is low. Experimental Designs Or Analyses: As discussed above, the theoretical analysis is strong and well rounded. Only one actual experiment is performed, but the setup and choice of evaluation datasets are sound as well. Supplementary Material: No Relation To Broader Scientific Literature: The idea of using logic to describe different DPO algorithms is novel. Although the scheme itself may be largely covered by the original semantic loss paper (i.e. 
composing logic representations based on NN outputs), the dedicated arguments and discussions for DPO algorithms would be valuable to the community. Essential References Not Discussed: n/a Other Strengths And Weaknesses: # Weaknesses While the paper provides interesting insights, the main impact on the field remains somewhat opaque. For example, - How well can other researchers leverage the proposed framework to identify novel loss formulations beyond the 4 shown in the paper? - Does this approach generalize to other models and larger parameter counts? Other Comments Or Suggestions: Related to the point above, Appendix E contains many important results to validate the proposed framework's capability. I would encourage the authors to include key insights in the main text as well. Questions For Authors: - Is there any significant computational overhead with the new loss formulations? (especially cCPO) - Can the introduced semantics over loss functions contribute to an efficient search of loss functions (e.g. pruning redundant ones)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your encouraging feedback. > How well can other researchers leverage the proposed framework to identify novel loss formulations beyond the 4 shown in the paper? While we reported experiments on 4 losses, we derived many more novel losses, including the 16 single model losses shown in Figure 7, each (excluding 1) with a novel DPO variant. We think that this set exhaustively captures the full set of single model losses that researchers would want to experiment with (it exhaustively shows all definable losses between CEUnl and unCPO), and offers many novel DPO variants that have not yet received empirical verification. We plan to release all the associated code and loss implementations to facilitate further work in this area. Given that our framework now allows one to define losses using a much more expressive logical language, where any valid propositional formula constitutes a valid loss function, we also believe that this makes it possible to more easily devise entirely new classes of loss functions that would otherwise be difficult to derive working solely from the mathematics of DPO (e.g., losses that involve more complex forms of feedback, non-differentiable components). We will include a concrete example of this in the updated draft. > Does this approach generalize to other models and larger parameter counts These approaches can be generalized to any language model of any size or number of parameters; the size of the underlying language model is usually not a critical factor in the loss computation (see below). > Is there any significant computational overhead with the new loss formulations? (especially cCPO) No. The cCPO loss is computed according to the equation given in Appendix D, and does not require any more computation than any of the other losses in that table. 
As above, computing losses is a relatively easy computation once the basic forward calls have been made to the model to obtain model output probabilities, which are done independently of the loss computation. >Related to the point above, Appendix E contains many important results to validate the proposed frameworks’ capability. Thank you for this suggestion. If accepted we intend to use the additional page to move into the main paper some of the details of the experiments now in the appendix (these were not included due to space limitations), as well as subsequent experiments we did to further verify these findings (see our response to `JCXg`).
Summary: This paper presents a fresh perspective on common loss functions in the rapidly growing direct preference optimization literature. In particular, by translating loss functions into symbolic expressions, the paper offers a principled way to analyze their semantics. This approach makes it easier to understand relationships between different DPO-style algorithms, in particular by revealing a hierarchy based on the amount of constraints imposed by each individual algorithm. Claims And Evidence: The paper makes two claims: 1- the translation into discrete reasoning allows us to better understand the differences between recent proposals in DPO-style algorithms. This is well-supported. 2- the second claim is that this Logic perspective allows us to derive and develop new DPO-style algorithms. While some preliminary signals are presented, this claim is not fully supported, as the evaluation for the new algorithms feel quite limited, on a single benchmark and with limited comparison with existing algorithms and also limited insights found. I think the key question about this paper remains what authors ask on the last page: Can we find empirically improved losses using this method? Methods And Evaluation Criteria: NA Theoretical Claims: NA Experimental Designs Or Analyses: The finding about the large space of algorithms being possible is really exciting. However, while the paper briefly mentions new loss variants, it lacks a strong empirical demonstration of how these losses improve performance over existing algorithms. Supplementary Material: Yes, specifically Appendix E, presenting more insights into the experimental evaluation of the newly proposed algorithms. Relation To Broader Scientific Literature: Instead of focusing on empirical improvements, this work formalizes DPA loss functions using symbolic logic. The work introduces preference structures, which categorize loss functions using logical relationships rather than just their optimization properties. 
This builds on the trend of moving beyond black-box RLHF and into more interpretative framework. Essential References Not Discussed: NA Other Strengths And Weaknesses: My main concern is regarding the notion of using an absolute value of \epsilon for determining "valid" model predictions. This seems somewhat ambiguous. If a winner in one preference pair is a loser in another, how does the framework handle such inconsistencies? In particular suppose that we have: x, y_1, y_2 x, y_2, y_3 in our dataset, which in fact can be fairly common. What would be the value of \epsilon in this case? Other Comments Or Suggestions: The paper states that the number of definable preference structures is doubly exponential in the number of model predictions. While this highlights a rich space of potential loss functions, it also raises concerns about how efficiently we can explore this space. Having read the paper, it is hard to identify a practical approach for navigating this space. Questions For Authors: The paper identifies that the number of definable structures is in fact doubly exponential. How do we then navigate such a huge space so that we actually get a practical benefit from this insight? You mention that the framework could help derive new DPA losses, but the paper primarily formalizes existing ones, with some limited observations about new algorithms. Have you discovered any novel loss functions that outperform existing approaches, and more importantly, a mechanism to understand which new algorithms could be more promising? Weighted Model Counting (WMC) is a core component of the framework for translating DPA losses into logical expressions. In light of the computational complexity of WMC how does this approach scale? What is the computational bottleneck in algorithm 1? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you, we are excited that you find our work “really exciting”; below we address your comments and concerns. > However, while the paper briefly mentions new loss variants, it lacks a strong empirical demonstration Please see our response below. Much of our experimental results were pushed to the appendix due to space limitations. If accepted, we intend to use the extra page to include more empirical results in the main paper, including the extra experiments we mention below. > computational complexity of WMC Indeed, WMC is a hard algorithmic problem, but we note that in our study this complexity is side-stepped since we are working with problems with a small number of variables (2-4). In such cases, one can simply write the WMC formulas in full and simplify them using algebra offline, which is how we obtained the new losses in App. D (we updated the draft to include details of how to compute this using SymPy). Nonetheless, the probabilistic reasoning community has devised various “knowledge compilation” techniques that allow one to scale WMC to much larger problems by compiling these logical representations into tractable circuits, which make many problems feasible in practice (and sometimes in theory). We are confident that if we were to greatly expand the complexity of our programs, such techniques, or other advanced SAT techniques, could be leveraged for the involved experiments. > Have you discovered any novel loss functions that outperform existing approaches, and more importantly, a mechanism Yes, we found in the experiments reported in Table 5 that the novel loss, $\ell_{cCPO}$ from Figure 4, outperforms the known loss $\ell_{CPO}$ in a win-rate study adapted from Meng et al. 2024. 
This does show evidence that our theory is able to yield competitive new losses (we have subsequently run these same experiments at larger scales using a >3x larger Smollm-1.7B model and are seeing similar trends; we intend to report these results in the updated draft pending the full results). In terms of mechanisms, we think our experiments do give insight into this, as noted below. – We find that the logical constrainedness of a loss function is an important contributing factor to its empirical success. For example, the highly unconstrained losses in Fig.4 tend to have spurious behavior due to the nature of their semantics and how the underlying constraints can be satisfied (see discussion in E.1), which we can see clearly in the behavior of their win/lose log probabilities and training dynamics shown in Figure 8 (such empirical behavior is something we consistently see across the many experiments we’ve run at different scales). Since DPO was introduced, there has been much puzzlement about the empirical behavior of log probabilities during training, which we think our constraint satisfaction view of the problem can help to elucidate. – We also see in Table 5 that losses perform differently on different subsets of data, which is a trend that we’ve seen persist through our later experiments. We take this as evidence that different tasks and datasets involve different semantics, requiring one to carefully tailor their losses to those semantics. > How do we then navigate such a huge space We tried to address this in “How is the loss space structured” (the right column of line 287). While the space is indeed large, we can see through results such as Proposition 3 that loss behavior is linked in interesting ways to the logical semantics of the losses, which can be exploited for exploration. 
The strategy we pursued in our case study was the following: start with an empirically successful loss (e.g., CPO), formalize its semantics, then modify its constraints to find new losses that are either more constrained (that entail CPO) or less constrained (that are entailed by CPO) and experiment accordingly. We think strategies like this are useful tools for navigating this space. > My main concern is regarding the notion of using an absolute value The \epsilon notion is only meant to be a tool or heuristic to conceptually think of the distribution as digitized or divided into “valid” and “invalid” outputs; it is not a value that we model explicitly or that plays any role in our formal analysis, and hence we do not make any assumptions about it (e.g., it needn’t be a fixed value). Regarding your example: if we interpret the top symbolic formula in Figure 2 in terms of this digitized distribution, this just says that “whenever we find the loser to be a valid generation (i.e., to be above this \epsilon) we should always find the winner to also be above this line (\epsilon) too”. Note that this semantics, which underlies many DPA approaches, does not rule out the possibility that the loser is a “valid” generation (or also above \epsilon). So for your example of `(x, y_1, y_2) (x, y_2, y_3)` we could set this hypothetical \epsilon to be the probability of `y_3` while still satisfying this constraint.
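The "write the WMC formula in full and simplify offline" step discussed in this rebuttal can be illustrated with a tiny enumeration. This is a hedged sketch, not the authors' released code; the probabilities are made-up example values, and the formula is the winner/loser implication shown in the paper's Figure 2.

```python
import itertools

# Hedged sketch of weighted model counting (WMC) for the two-variable
# implication y_l -> y_w ("if the loser is a valid generation, so is the
# winner"). Example probabilities below are made up for illustration.
p_w, p_l = 0.9, 0.4  # model probabilities of winner / loser being "valid"

wmc = 0.0
for y_w, y_l in itertools.product([True, False], repeat=2):
    if (not y_l) or y_w:  # keep only assignments satisfying y_l -> y_w
        wmc += (p_w if y_w else 1.0 - p_w) * (p_l if y_l else 1.0 - p_l)

# Algebraic simplification of the same sum: WMC = 1 - (1 - p_w) * p_l;
# a semantic loss would then be -log(WMC).
assert abs(wmc - (1.0 - (1.0 - p_w) * p_l)) < 1e-12
```

With only 2-4 variables, this brute-force enumeration (or its symbolic analogue in SymPy) is trivially cheap, which is the sense in which WMC's general hardness is side-stepped here.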
Summary: This paper proposes a novel framework to unify various preference optimization losses as logical programs over response orders. A pairwise preference encodes the logic that a rejected response being in the policy shall imply that the chosen response is also in the policy; a supervised loss implies that the response is favored by the policy. The proposed framework characterizes each preference optimization loss function as such a logical program. On the other hand, they provide a mechanism to remap a logical program to a corresponding loss function. By exploring the space of logical implications, they essentially explore the space of optimization losses. In addition to drawing the relationship between losses and logical constraints, their framework provides an interesting roadmap to understand the relationships between different losses. Finally, they provide a proof-of-concept experiment showing that their approach has the potential to discover better preference optimization losses. Claims And Evidence: This paper is mostly a theoretical paper that provides a mathematical framework connecting preference optimization losses and logical implications. I have followed most of its derivations in the paper, which look good to me. Methods And Evaluation Criteria: The proposed method is a brand new viewpoint of preference optimization losses. The most interesting finding is that most of the preference optimization losses are a combination of "semantic log ratios". Each semantic log ratio corresponds to a certain logic constraint; these are then combined in the overall loss function. It is a very interesting framework to shed more light on understanding preference optimization algorithms. Beyond its main mathematical contributions, the paper shows a proof-of-concept experiment demonstrating some new losses found in their framework. 
While the experiment is only minimal, this framework provides the potential to dynamically search for the best loss function, similar to how neural architecture search improves neural network performance. Theoretical Claims: I can follow the logical flow in the paper. But I didn't check the proofs carefully. Experimental Designs Or Analyses: This paper involves a minimal proof-of-concept experiment only. If there were more compelling experiments showing some really strong losses, it would help the paper more. Supplementary Material: No. Relation To Broader Scientific Literature: This paper is a valuable addition to the relevant science literature. In the literature, the preference optimization landscape has been very complicated, due to the combination of many design factors like loss function and reference policy. This paper provides a novel interpretation for those algorithms and a roadmap to connect the scattered dots. Essential References Not Discussed: Not necessarily essential references, but there have been many relevant papers which incorporate "scalar reward signals" in the training objective. With the fine-grained reward information, those methods tend to outperform methods relying on binary preference labels. It would be a strong addition if this framework could include these methods in the roadmap as well. Some relevant papers are as follows. [1] RPO. Nemotron-4 340B Technical Report; [2] Distill DPO. Robust Preference Optimization through Reward Model Distillation. [3] InfoNCA. Noise Contrastive Alignment of Language Models with Explicit Rewards. [4] BRAIn: Bayesian reward-conditioned amortized inference for natural language generation from feedback. Other Strengths And Weaknesses: This paper is well written. The logic is easy to follow. Other Comments Or Suggestions: None. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your feedback, we are pleased that you find our approach to be “a valuable addition to the relevant science literature” > many relevant papers which incorporate "scalar reward signals" in the training objective. With the fine-grained reward information, those methods tend to outperform methods relying on binary preference labels. Thank you for these pointers. This does look like an exciting new area where we can try to apply our techniques. While we haven’t yet digested all the details of these papers, it does seem like the kind of distillation objective in Fisch et al. 2025 (Eq. 7) could fit into our framework by adding these additional reward model estimates into our logical formulas as additional predicates (we will think more about this and, if appropriate, mention this direction in an updated draft).
Summary: The paper proposed the decompilation of loss functions such as DPO into symbolic programs. More specifically, the authors present how to derive probabilistic propositional logic programs that can, in turn, be manipulated and compiled into potentially novel and improved losses for preference alignment. Claims And Evidence: The paper claims to (i) introduce formal insights into and characterization of DPA losses through their decompilation into symbolic programs and (ii) practical insights into effective searches for novel DPA losses that improve over the state-of-the-art. Although compelling, the work does not convincingly support the claims, as it lacks clarity in the formal presentation and only provides minor practical insights on feasibility. Methods And Evaluation Criteria: The proposed method seems plausible to improve preference alignment techniques through the interpretation and manipulation of losses on a symbolic level. However, further experiments and discussions are required to evaluate the approach fully. Theoretical Claims: Although the paper contains theorems (one of which is followed by a proof paragraph, the other is not), the paper overall is lacking in clarity, which hinders thorough checks on the contributions' correctness. Experimental Designs Or Analyses: Compared to the paper's claims, I found the experiment design lacking in its scope and discussion. Although the authors state "While these experiments are small scale and limited in scope, they are merely meant to suggest possible uses our framework and open questions." (page 14, line 716), the experiment mostly shows a basic usage example without providing deeper insights into the method or discussion on limitations. Supplementary Material: The paper contains an appendix which I have considered in my review. Relation To Broader Scientific Literature: This work explores the decompilation of losses employed in preference alignment into probabilistic propositional logic. 
Hence, it relates to literature aiming to guide the training of deep models such as LLMs to align with some additional human preferences. Furthermore, the chosen representation of the decompiled losses relates to probabilistic logic programming (and thereby methods of statistical relational and neuro-symbolic AI), employing weighted model counting over the automatically generated symbolic representations of the loss. Essential References Not Discussed: Essential references have been discussed. Other Strengths And Weaknesses: Strengths: - The paper introduces a compelling method for the decompilation of loss functions into probabilistic propositional logic (and vice versa), opening an interesting avenue for explaining, manipulating, or exploring losses. - The research direction of the paper is important and approached in an interesting and original way. Weaknesses: - The paper contains some confusing wording that needs to be improved. For example, at the end of 'Neuro-symbolic modeling,' the authors first state, "In particular, we focus on approaches based on probabilistic logic.". The next sentence disagrees, "In contrast, we focus on the inverse [...]". - Some figures, e.g., Table 2, are visually challenging to parse, so I recommend reworking them for clarity. Others, like Tables 1, 3, and 4, are less problematic but may similarly be improved. - Some statements made in the paper need clarification. For instance, on page 3, the authors write, "We use θ2 and ref2 to refer to copies of our two models, which is a decision that we address later [...]". At this point in the paper, the statement left me somewhat puzzled in its meaning. - The paper's presentation suffers from how information is distributed throughout the manuscript. For example, on page 5, "Decompilation into semantic loss" is described but references/requires insights from Table 2 (page 3), Section 5.2 (page 7), and Table 6 (page 11) to be understood. 
- Along the lines of the previous comments, the paper could benefit from some restructuring. As an example, rather than leading into the experiments, Section 6 "Results and Discussion" begins with two more theorems, only one of which is followed by a proof paragraph. Other Comments Or Suggestions: - On page 3, line 146, "No reference" is in bold text; this might be a mistake. - Figure 1 illustrates the core idea of the paper well, but the concepts within the symbolic program may be difficult to parse for a first-time reader on page 1. Perhaps additional annotation or a simplified illustration/example would be more digestible at this point in the paper. - The abstract employs the abbreviation DPO without introducing it beforehand. - The use of bold text for both sectioning and underlining important concepts/keywords may be suboptimal. Questions For Authors: I have no questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the feedback; as noted in our response to the above reviewer, we’ve already taken steps to improve the presentation, which we hope will also address your concerns. We are encouraged that you nonetheless found our approach to be “compelling” and an “interesting avenue for explaining, manipulating, or exploring losses”. Below we address your particular comments and questions. > the experiment mostly shows a basic usage example without providing deeper insights into the method or discussion on limitations. Our main contribution is to provide clarity into the ever-growing space of preference losses. The symbolic view of these losses allows us to make the preferences encoded by them explicit, understand relationships between them, and, via counting arguments, enumerate the space of all possible losses with or without a reference model. Moreover, it also provides a conceptual framework for inventing novel kinds of losses. We also note that many of our experimental results were pushed to the Appendix due to space issues (please see our response to JCXg). If accepted, we intend to use the additional page to incorporate such results in the main paper. > "In particular, we focus on approaches based on probabilistic logic.". The next sentence disagrees, "In contrast, we focus on the inverse [...]". Sorry for the confusion; we will modify this for clarity. To clarify: the point about this “inverse problem” is that while most people in the neuro-symbolic field have focused on the problem of “compilation” (i.e., translating symbolic formulas into loss functions by interpreting those formulas using probabilistic logic), we focus on the “inverse” and more unique problem of “decompilation” (i.e., deriving symbolic formulas for *existing* loss functions that we also interpret in terms of probabilistic logic).
In both cases, probabilistic logic is used as the ingredient that makes communicating between these symbolic forms and losses possible, which is why we say that our approach is “based on probabilistic logic” (we felt that this was important to clarify since other popular modes of translation, such as fuzzy logic, could have been used here as an alternative to the probabilistic approach). > On page 3, line 146, "No reference" is in bold text, this might be by mistake. We will fix this inconsistency. > Figure 1 illustrates the core idea of the paper well, but the concepts within the symbolic program may be difficult to parse for a first-time reader on page 1. Perhaps additional annotation or a simplified illustration/example would be more digestible at this point in the paper. As noted in the response to **fMQw**, we completely revamped this figure to make it more clear along the lines that you suggest. > The abstract employs the abbreviation DPO without introducing it beforehand. Thank you for catching this; we will change DPO to “Direct Preference Optimization”. > The paper's presentation suffers from how information is distributed throughout the manuscript. For example, on page 5, "Decompilation into semantic loss" is described but references/requires insights from Table 2 (page 3), Section 5.2 (page 7), and Table 6 (page 11) to be understood. We updated our draft to account for these structural issues.
Summary: The work attempts to structure the corpus of existing optimization losses for direct preference alignment (DPA) and to discover new ones. To this end, existing DPA methods are unified and cast as a reasoning problem. Namely, each loss corresponds to a set of logic formulas that are optimized via weighted model counting (WMC) under extended preference structures. Logical entailment of the formulas leads to a lattice of losses, and possibly new losses, for which first empirical evaluations are promising. ## update after rebuttal I want to thank the authors for their insightful and comprehensive response. All major concerns have been resolved. However, the clarity is still not quite where it could be, for which reason I maintain my score of 3 (weak accept). Claims And Evidence: The main claims are that most common DPA losses can be cast as optimizing probabilistic logic formulas. This is appropriately substantiated by the presented constructions and proofs (see also "Theoretical Claims"). However, it is unclear which of the many DPA methods are covered by the new perspective introduced in the work. Methods And Evaluation Criteria: The presented empirical findings are but a first glimpse of what can be explored, but sufficient to show that the approach is fundamentally feasible. Theoretical Claims: The paper contains various definitions, theorems, and proofs of them. The definitions are mostly given inline in the text, which is legitimate but slightly hinders clarity. Experimental Designs Or Analyses: The experiments are very minimal, yet sufficient for showing the basic feasibility of the approach. A lot of future work could extend this. Supplementary Material: The supplement consists of the appendix (and no implementation of the experiments). Due to the short time window allotted to me, I could not check it thoroughly. It appears to be well-structured and supports the main paper.
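As background for the WMC mechanism described in the summary above, the following is a minimal brute-force sketch of weighted model counting and the semantic loss it induces. It is illustrative only: the toy implication formula and variable names are assumptions for the example, not the paper's actual construction.

```python
import itertools
import math

def wmc(formula, weights):
    """Brute-force weighted model counting: sum, over all satisfying
    assignments of `formula`, the product of per-variable weights.
    `weights[v]` is the probability that variable v is true."""
    variables = sorted(weights)
    total = 0.0
    for values in itertools.product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            p = 1.0
            for v in variables:
                p *= weights[v] if assignment[v] else 1.0 - weights[v]
            total += p
    return total

def semantic_loss(formula, weights):
    """Semantic-loss-style objective: negative log of the weighted count."""
    return -math.log(wmc(formula, weights))

# Toy preference-style constraint over two variables: "w implies not l".
implies = lambda a: (not a["w"]) or (not a["l"])
weights = {"w": 0.9, "l": 0.2}
# WMC = P(not w) + P(w) * P(not l) = 0.1 + 0.9 * 0.8 = 0.82
print(round(wmc(implies, weights), 2))  # 0.82
```

Losses of this shape become smaller as the weights place more probability mass on satisfying assignments, which is the sense in which optimizing a loss can be read as optimizing a logic formula.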
Relation To Broader Scientific Literature: I cannot judge the completeness, yet I did not find any missing spots. Essential References Not Discussed: I am not aware of any. Other Strengths And Weaknesses: The presented work uses well-known tools from different sub-disciplines (logic/neuro-symbolic reasoning) to structure the jungle of DPA losses. This very original and promising approach leads to an improved understanding of existing works and possibly new principled discoveries. It is original, yet the writing could be clearer in various places (see, e.g., the next section). The lack of clarity regarding when the method is applicable, and the description and verification of the method itself, are the main weaknesses of the work. Other Comments Or Suggestions: Comments on clarity and minor opportunities for improvement: - To me, the start of Sec. 3 is harder to read than necessary. It might help to introduce the role of $\beta$ a bit earlier and move the information from the caption of Tab. 2 ("All losses ...") to the main text to make it self-contained. - Fig. 1: DPO2 could also be named as such on the right-hand side. - Fig. 1 talks about "compilation" and "derivation", while other sections, such as 4.1 (and others), talk about "compilation" and "decompilation" instead. This should be unified. - Fig. 3: The entire figure is a bit unclear, and it personally confused me more than it helped. It might make sense to remove it, as WMC is well-known and can be learned about from other resources. Otherwise, why does $\checkmark$ correspond both to $P$ and to $\bar{P_f}$, and similarly for the inverse? The meaning of empty cells in the table is also unclear to me. - Some columns in Tab. 5 are missing highlights. - Using page 16 for a single `)` is avoidable. Questions For Authors: - **Q1**: In Sec. 4, l. 172ff, you state, "We assume that all preference loss functions have an internal logic that can be expressed in the form described above."
To what degree is this a limitation of the approach? What happens if that is not the case and we apply the translation regardless? Generally, can you more clearly state the conditions needed to apply the translation from loss to your symbolic representation? - **Q2**: A crucial piece for the understanding of the translation is missing: How is, *intuitively*, the implication on the left-hand side of, say, Fig. 1 realized by the loss formula? Some discussion along the lines of "if $\pi_\theta(x, y_w)$ is high, $\sigma$ saturates, and then ..." would help. This would help to understand the compilation/derivation steps and serve as a good sanity check of the method, or as a case study. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback; we are encouraged that you found our approach to be “very original and promising”. We have already taken steps to improve the presentation in the places you mention, and we address your particular points below. > To me, the start of Sec. 3 is harder to read than necessary. It might help to introduce the role of β a bit earlier and move the information from the caption of Tab. 2 ("All losses ...") to the main text to make it self-contained. We will address this in the revised draft. > Fig. 1: DPO2 could also be named as such on the right-hand side. We have already modified Figure 1 according to your suggestions and, in addition: 1) tried to make the figure easier to read and less blurry; and 2) expanded the caption to more clearly state the problem and the goals of the paper. > Fig. 1 talks about "compilation" and "derivation" Thank you for noticing this. We will fix it to be more consistent. Also, we restructured part of Section 4 to make clearer the precise relationship between compilation and derivation (i.e., that we treat the former as the inverse of the latter), which we hope better motivates why we immediately start in Sec. 4.1 by talking about the “Compilation to semantic loss”. > Fig. 3: The entire figure is a bit unclear, and it personally confused me more than it helped. As with Figure 1, we did a complete revamp of this Figure, which we hope addresses your confusion and makes the modifications you suggest. To state the role of Figure 3 more clearly (as we do in our updated draft): Preference structures, via Prop. 2, can be equivalently expressed in terms of *two* Boolean functions, $P_w$ and $P_l$, which correspond to the checkmarks and xmarks in the figure respectively (and are also equivalent to the formula forms in Eq. 4).
We think that this view is helpful for: 1) understanding the corresponding model counting problem visually; 2) understanding how our generalized formulation of semantic loss, as captured by the equation in this figure, is general enough to capture things like conditional probabilities (i.e., not satisfying $P_C$, as captured by the white boxes in Fig 3; we will update the arrow from $P_C$ in Fig 3 to point to such white boxes without any checkmark or xmark). > "We assume that all preference loss functions have an internal logic that can be expressed in the form described above." To what degree is this a limitation of the approach? What happens if that is not the case and we apply the translation regardless? To clarify this claim, which is quite general, we assume that all preference losses have *some* internal logic that can be expressed in a discrete form. Whether or not they can all be expressed in the logic we propose is a separate question (e.g., it might be that some preference losses outside of our study require a more complex logical system beyond propositional logic); we will clarify this point, but we think it is an important working assumption to note. For the preference losses under consideration in Table 2 (which cover many of the most popular DPA losses), Thrm. 2, however, does establish that our decompilation procedure and logic correctly capture the logic of these losses in the following sense (which we will make more clear): our decompilation procedure can take any of these losses as input and produce a semantic representation that can be compiled back into exactly and uniquely that loss via our logic. The main condition that needs to be met for the decompilation procedure to be applicable is that the input loss equation is a “disjoint multilinear polynomial” as defined in line 346 (regarding your other point, we see that a non-inline version of this definition would be helpful here to make this point more clear).
Of course, if the input loss does not follow this polynomial form, the translation runs the risk of not being correct, but such losses are out of scope for this study and we could imagine making the translation more complex to expand this to other polynomial classes. > How is, intuitively, the implication of the left-hand side of, say, Fig. 1, realized by the loss formula? Yes, we agree that a clear intuition here would be helpful. Given the way that our implication construction works (i.e., the construction in the proof of Prop. 2 which drives Algorithm 1 and our decompilation procedure), the left side of the implication in semantic representations like those in Figure 1 (or the representations in Table 4) corresponds to the lower/bottom part of the log ratios in the preference losses (Table 2, $\rho_{\theta}^{b}$) and the right side to the upper part of these log ratios ($\rho_{\theta}^{t}$). > The definitions are mostly given inline in the text We will fix this, particularly by turning some of the core definitions that our formal results rely on into (non-inlined) formal definitions.
Rectified Robust Policy Optimization for Robust Constrained Reinforcement Learning without Strong Duality
Reject
Summary: This paper studies robust constrained reinforcement learning (RCRL). It first presents a counterexample illustrating that strong duality does not generally hold in RCRL. The paper therefore proposes the rectified robust policy optimization algorithm, building on the previous algorithm CRPO. The paper provides the sample complexity of converging to an approximately optimal and safe policy. The empirical results justify the algorithm. Claims And Evidence: The challenges of combining Robust RL and constrained RL are not explicitly elaborated. Although the paper emphasizes that the introduction of uncertainty leads to the failure of the strong duality guarantee, causing the primal-dual methods used in constrained RL to become ineffective, there is no discussion of the challenges that arise when primal-only methods (such as CRPO) are combined with Robust RL. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. I sketched the proof of the main theorem. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. I sketched the proof of the main theorem. Relation To Broader Scientific Literature: The authors list scenarios where machines may experience unforeseen transitions due to equipment aging or failure, and discuss how to ensure robot safety in these worst-case situations. Additionally, the authors could enumerate more application scenarios of Robust Constrained RL (such as autonomous driving, smart healthcare, etc.) to enable readers to fully recognize the practical research value and strong motivation of Robust Constrained RL. Essential References Not Discussed: This paper has a relatively comprehensive discussion of the relevant literature on Robust RL and Constrained RL, and also discusses work on strong duality under some previous robust settings. Other Strengths And Weaknesses: Weakness - The algorithm RRPO seems to merely incorporate robust RL methods for value function approximation based on CRPO.
The authors do not elaborate on the challenges of elevating from CRPO to RRPO, that is, of combining Robust RL and Constrained RL. Therefore, the technical contribution is a bit marginal. - Assumption 4.2 contradicts the counterexample, where the state $s_1$ is transient and $d(s_1) \to 0$ under any policy. In other words, under Assumption 4.2, could the duality gap be zero, so that primal-dual type methods are applicable and have strong guarantees? - The theory (Theorem 4.3) does not seem to suggest any relationship between the sample complexity and the model mismatch. - In the experiments, the paper only uses a CMDP method as the baseline; it would be more convincing to consider robust CMDP methods (e.g., Wang et al. 2022 and Ghosh et al. 2024). Other Comments Or Suggestions: Please see the weaknesses. Questions For Authors: Please see the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate the reviewer's constructive suggestions. Here are our point-by-point responses: ## Relation To Broader Scientific Literature: The reviewer's suggestion on providing more application scenarios is indeed helpful. We will revise the introduction section to include these applications. ## Other Strengths And Weaknesses: 1. ***RRPO is similar to CRPO:*** Given that strong duality may not hold in robust constrained RL, it is natural to explore whether existing algorithms in constrained RL can still be effective. Our contribution is not in proposing an entirely new algorithm but in demonstrating that it is possible to apply an existing classical constrained RL approach to achieve optimal sample complexity without relying on strong duality assumptions. This direction is no less challenging than proposing a new algorithm, for the following reasons: (1) Much of the constrained RL literature relies on the well-known paper "Constrained Reinforcement Learning Has Zero Duality Gap"; if strong duality does not hold, most of these theoretical analyses become invalid. (2) Even when the convergence analysis does not rely on the zero duality gap, in our empirical examples CRPO still converges to an infeasible result. As a result, we successfully address these two challenges and show that it is not necessary to introduce additional complicated structures. 2. ***Assumption 4.2 violates the counterexample:*** We would like to clarify that Assumption 4.2 doesn't contradict the counterexample, because * The stationary distribution at $s_1$ is not zero unless the policy $\pi(a_1|s_0)=0$. In this case ($\pi(a_1|s_0)=0$), every time the agent arrives at state $s_0$, it will be trapped there, which makes the state $s_1$ transient.
More explicitly, if we let $\pi_0:= \pi(a_0|s_0)$ and $\pi_1:= \pi(a_1|s_0)$, we can solve for the stationary distribution to check transience: $\mu = \begin{pmatrix} \dfrac{1}{1 + \pi_1(1-p)} \\ \dfrac{\pi_1(1-p)}{1 + \pi_1(1-p)} \end{pmatrix}$. * In Assumption 4.2, we are using the discounted visitation measure. This definition is slightly different from the stationary distribution: for an ergodic Markov chain, the stationary distribution is independent of the initial distribution, while the discounted visitation measure depends on it. Therefore, we note under Assumption 4.2 that we can use a more uniform initial distribution to obtain a strictly positive $p\_{\min}$, which is also valid for our counterexample and does not change the absence of strong duality in robust constrained RL. 3. In Theorem 4.3, the dependence on the model mismatch is included in the Policy Evaluation Accuracy assumption. Usually, the larger the model mismatch is, the more computation is required to achieve the desired accuracy. It could depend on the structure of the uncertainty set and the robust policy evaluation algorithm, which is out of the scope of our study. 4. We appreciate the reviewer's valuable suggestions. We will add these baselines in our revision.
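The stationary-distribution formula above can be verified numerically. The sketch below assumes the two-state chain that the formula implies (from $s_0$, transition to $s_1$ with probability $\pi_1(1-p)$ and stay otherwise; from $s_1$, return to $s_0$ deterministically); this reading of the counterexample is our assumption, not taken verbatim from the paper.

```python
def stationary(pi1, p, iters=10_000):
    """Power-iterate the assumed two-state transition matrix from a
    uniform start and return the limiting distribution [mu(s0), mu(s1)]."""
    P = [[1 - pi1 * (1 - p), pi1 * (1 - p)],  # s0 -> {s0, s1}
         [1.0, 0.0]]                          # s1 -> s0 deterministically
    mu = [0.5, 0.5]
    for _ in range(iters):
        mu = [mu[0] * P[0][0] + mu[1] * P[1][0],
              mu[0] * P[0][1] + mu[1] * P[1][1]]
    return mu

pi1, p = 0.7, 0.4
mu = stationary(pi1, p)
# Closed form from the rebuttal: mu = (1, pi1*(1-p)) / (1 + pi1*(1-p))
closed_form = [1 / (1 + pi1 * (1 - p)), pi1 * (1 - p) / (1 + pi1 * (1 - p))]
assert all(abs(a - b) < 1e-9 for a, b in zip(mu, closed_form))
# mu(s1) vanishes only when pi1 = 0, i.e., when s0 never transitions to s1.
assert stationary(0.0, p)[1] < 1e-9
```

Under this assumed structure, the closed-form expression matches the power-iterated stationary distribution, and $\mu(s_1)$ is strictly positive whenever $\pi_1 > 0$ and $p < 1$.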
Summary: This paper studies efficient algorithms for solving constrained RMDPs. In the light of the lack of strong duality for constrained RMDPs, a *primal-only* algorithm, Rectified Robust Policy Optimization (RRPO), is proposed with theoretical convergence rate guarantees. The performance of the algorithm is further justified by numerical simulations. Claims And Evidence: The claims are supported by both theoretical and empirical results, though some results seem suspicious. Methods And Evaluation Criteria: The evaluation method looks reasonable to me. Theoretical Claims: After a careful check of the proofs (esp. those in Section C), I find two major concerns that may jeopardize the validity of the theoretical results. 1. The proofs fail to handle the multiple update timescales well. * The current analysis looks very similar in flavor to that of the standard NPG. However, the NPG update rule at time $t$ with respect to the $i$th value function (Theorem C.5, and when it is cited on line 973 and line 1089) **only holds when $V_i^{\pi_t}$ is chosen to be updated in the $t$th round**. * More specifically, those equations only hold for $t \in \mathcal{N}_i$, where $\mathcal{N}_i := \lbrace t \mid V^{\pi_t}_i ~\textrm{is sampled to be updated} \rbrace$, but not for any $t \in \mathcal{N}_j$ where $j \neq i$. In this way it is hard to do the telescoping sum. * I personally expected to see how the authors would handle the interaction of multiple update timescales when I saw the algorithm, but obviously this key technical challenge is "circumvented" using bad notation. 2. Beyond the above issue, Lemmas C.6 and C.7 also use Lemma C.4 in an incorrect way. * Note that in the statement of Lemma C.4, all initial distributions involved are identically $\mu$. At a very high level, if this lemma is the only tool used, **we should not expect different initial distributions in the results**.
* However, this pattern is broken in the proofs of Lemma C.6 (inequality (i) around line 1033) and Lemma C.7 (first inequality around line 1075). * It is possible to show a performance difference lemma that involves different initial distributions, but that lemma probably has a different form, which may significantly change the form of the final results. Specifically, I would expect an additional term characterizing the difference between $\mu$ and $\nu$, the two initial distributions. Experimental Designs Or Analyses: Since this is mostly a theory-oriented paper, the numerical simulations only act as supporting evidence. For this purpose, the experimental design and results look good to me. Supplementary Material: See the "Theoretical Claims" section. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. I appreciate appendix A.2, which compares this paper against a few very recent works and justifies its novelty. Weaknesses: 1. The writing of the paper can be further improved. * Algorithm 1 is written in a very ambiguous way. Variables are sometimes marked with time $t$ (e.g., $d_0^{t+1}$), but usually not. The parameter $\theta_t$ is used before being explained (in a later sentence in Section 4.2). Some variables are never used (e.g., $\mathcal{N}_0$). * In the last section for numerical examples, most effort is spent explaining the experimental setting, leaving only a couple of sentences explaining the observations and their implications. 2. Some assumptions are hidden deeply in the proof details. * It seems from the problem formulation that the algorithm can handle general uncertainty sets, but in fact it only works with those uncertainty sets equipped with efficient Q-function approximators. * In Lemma C.8, $V_i^{\pi}(\nu^*)$ is assumed to be Lipschitz without proof. 3. Proofs may be invalid due to the issues mentioned above. Other Comments Or Suggestions: 1.
There are a few typos and notational inconsistencies. * In equations (6) and (7): the robust value function is denoted by $\widetilde{V}$ instead of $V$. * On page 14, step 3 (below line 750): the partial derivative should be $\frac{\partial \mathcal{L}}{\partial \pi_1}$. * In the appendix: $d_{KL}$ and $D_{KL}$ are interchangeably used. Questions For Authors: See above. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: ## Regarding Major Issues We deeply appreciate the reviewer's careful reading. We absolutely understand that these two concerns are critical, and we truly believe they are caused by misunderstandings due to our unclear presentation. Here we provide additional discussion to clarify and explain why the results are correct: 1. **Invalid telescoping:** It is correct that all inequalities (Lemma C.6 and Lemma C.7) only hold when $t\in \mathcal{N}_i:=\\\{ t: V_i^{\pi_t} \text{ is sampled to update}\\\}$. However, we would like to highlight that we can always sum over $t$ while carefully handling the mismatch in the indices. To make it more clear, we will use $i_t$ to indicate that at the $t$-th step, $V\_{i\_t}^{\pi\_t}$ is sampled. Therefore, when we sum $V\_{i\_t}^{\pi^*}(\mu) - V\_{i\_t}^{\pi\_{t+1}}(\mu) \leq a[d(\pi^*, \pi\_t) - d(\pi^*, \pi\_{t+1})]+b$ over $t$ (as an example), it becomes $$\sum\_{t=1}^T \left[ V\_{i\_t}^{\pi^*}(\mu) - V\_{i\_t}^{\pi\_{t+1}}(\mu) \right] \leq a [d(\pi^*, \pi\_1) - d(\pi^*, \pi\_{T+1})] + Tb.$$ As ***the right-hand side doesn't depend on the index $i$***, this summation is always ***valid***. We spent a lot of effort to make this happen: (1) we use the Lipschitzness in $\pi$ of $V_i^\pi$ to remove the dependence on $i$ in Line 1165, (2) we replace $\hat{Q}\_i^{\pi\_i} - Q\_i^{\pi\_i}$ with the policy evaluation error (Assumption 4.1), and (3) we follow the standard NPG result to decompose the KL divergence and construct the desired telescoping structure $a [d(\pi^*, \pi\_1) - d(\pi^*, \pi\_{T+1})]$ (Lemma C.7). When handling the left-hand side, we set $\delta$ to be "not too small" in Lemma C.8. Under this setting, we ensure that the set $\mathcal{N}_0$ is always non-empty; that is, over the whole training process $t=1,2,\dots,T$, there always exists a $t$ at which the index $0$ is sampled.
As a result, in Line 1169, we can keep the term $\sum\_{i \in \mathcal{N}_0}$ and lower bound the other terms $\sum\_{i \notin \mathcal{N}_0}$ using a trivial bound obtained from Lines 1157-1166. 2. **Misuse of Lemma C.4 in Lemma C.6 and Lemma C.7**: We hope to clarify that we are correctly using Lemma C.4. It doesn't require the initial distribution to be fixed as $\mu$; instead, it is an inequality depending on the initial distribution. When we change the initial distribution $\mu$ to another distribution $\nu$, the inequality still reads: $$a E\_{d_\nu \otimes \pi'} [A^\pi(s,a)] \leq V^{\pi'}(\nu) - V^{\pi}(\nu) \leq b E\_{d_\nu \otimes \pi'} [A^\pi(s,a)].$$ The differences are: (1) the visitation measure changes from $d_\mu$ to $d_\nu$; (2) the robust value function changes from $V(\mu)$ to $V(\nu)$. As a result, we are using Lemma C.4 in Lemma C.6 and Lemma C.7 as follows: * In Lemma C.6, we use Lemma C.4 in Lines 957-958, which faithfully follows the structure of Lemma C.4. Line 1033, mentioned by the reviewer, follows another result, $d\_\nu = \frac{d\_\nu}{\nu} \nu \leq (1-\gamma) \nu$, which applies a change-of-measure trick to change the current measure $d\_\nu$ to another measure $\nu$. Lemma C.4 is not involved here. * In Lemma C.7, we use Lemma C.4 in Line 1075. Here, we have defined a shorthand notation $\nu^*$ to represent $d\_\mu^{\pi^*, P^*}$ in Line 1073. As the subscript $\mu$ is aligned with the argument in $V^\pi(\mu)$ on the left-hand side, our application of Lemma C.4 here is still correct. ## Regarding Other Issues We thank the reviewer again for these constructive suggestions. We will revise our manuscript based on these valuable comments, and we will clearly state that our method can only handle uncertainty sets equipped with efficient Q-function approximators. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' timely response.
However, the responses do not fully answer my questions. 1. **Invalid telescoping**. Given the above explanations, I'm still skeptical about the validity of the proof. For a concrete example, in the proof of Lemma C.6, the index $i$ in the statement of the lemma is a running index $\forall i \in [I]$. However, the $i$ on line 974 actually means $i_t$ defined above, since in the $t$th iteration **there is only one $Q_i$ used for the NPG update**. Similar issues exist with Lemma C.7. I acknowledge that the issue is (kind of) circumvented on line 1169, but line 1165 still seems problematic to me. * At least, the current system of notation is disastrous and needs to be revised significantly. 2. **Misuse of Lemma C.4**. The explanation regarding the shorthand notation $\nu^*$ is fair, but I'm really surprised to see a shorthand notation used as the initial distribution (appearing in the subscript as $d_{\nu^*}^{\pi}$), which makes me doubt the upper bound on $C_{\textrm{approx}}$. On the other hand, **the issue of misuse still exists on lines 1021-1039**, where $\mu$ and $\nu$ appear in the same equation for no reason, and I do think Lemma C.4 cannot account for it this time. Given the number of issues that exist in the paper, I would suggest a major revision of the paper that could then potentially appear at another conference. I have decided to keep my rating for now. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's comment, but we politely disagree with the reviewer's evaluation of our work. 1. Invalid telescoping. We have answered Reviewer ywXF's question, and it indicates that this is the reviewer's misunderstanding and an incorrect evaluation of our derivation steps. We follow the standard notation used in the existing literature [ref1], which is clear in context. > [ref1] Xu, Tengyu, Yingbin Liang, and Guanghui Lan. "Crpo: A new approach for safe reinforcement learning with convergence guarantee." International Conference on Machine Learning.
PMLR, 2021. 2. Misuse of Lemma C.4. We have clearly stated that the shorthand notation $\nu^*$ is defined in Line 1075, which should not be surprising. Our derivation on lines 1021-1039 directly follows Lemma 6 of the peer-reviewed work [ref1], which has been carefully verified and shown to be correct. > [ref1] Xu, Tengyu, Yingbin Liang, and Guanghui Lan. "Crpo: A new approach for safe reinforcement learning with convergence guarantee." International Conference on Machine Learning. PMLR, 2021. As a result, we hope the reviewer will kindly re-evaluate our work.
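As a neutral arithmetic aside on the telescoping step debated in this thread: the summed right-hand side is index-free, and the telescoping identity itself can be sanity-checked numerically. The sketch below uses arbitrary illustrative values, not the paper's actual quantities.

```python
import random

# Summing a*[d_t - d_{t+1}] + b over t = 1..T collapses to
# a*[d_1 - d_{T+1}] + T*b, regardless of which index i_t is sampled
# at each step, since no term on the right depends on i_t.
random.seed(0)
T, a, b = 50, 2.0, 0.3
d = [random.random() for _ in range(T + 1)]  # stand-ins for d(pi*, pi_t)
lhs = sum(a * (d[t] - d[t + 1]) + b for t in range(T))
rhs = a * (d[0] - d[T]) + T * b
assert abs(lhs - rhs) < 1e-9
```

This only checks the algebraic cancellation, of course; whether each per-step inequality holds for the sampled index is the substantive point under dispute.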
Summary: This paper first uncovers, via a toy example, that strong duality does not generally hold in robust constrained RL, which indicates that traditional primal-dual methods may fail to find optimal feasible policies. To address this limitation, it proposes a primal-only algorithm called RRPO, which introduces a 3-stage policy update: Threshold Updates, Constraint Rectification, and Objective Rectification. A related convergence analysis is also presented in this paper. Claims And Evidence: No. 1. It is not convincing to argue via a simple toy example that primal-dual methods will fail in robust constrained RL. Actually, even in some constrained RL (without robustness) cases with nonconvex objectives and constraints where strong duality does not hold, primal-dual methods can still obtain reasonable performance. In that case, it is hard to say that the primal-only method must surpass primal-dual methods. 2. The grid-world and mountain car control tasks are too trivial. The proposed method needs to be compared with more methods such as [1,2] on more tasks (e.g., MuJoCo locomotion tasks) to demonstrate its efficiency. [1] Zhou R, Liu T, Cheng M, et al. Natural actor-critic for robust reinforcement learning with function approximation[J]. Advances in Neural Information Processing Systems, 2023, 36: 97-133. [2] Kumar N, Derman E, Geist M, et al. Policy gradient for rectangular robust Markov decision processes[J]. Advances in Neural Information Processing Systems, 2023, 36: 59477-59501. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. I have checked the correctness of the provided toy example and the related proofs of NPG convergence. Experimental Designs Or Analyses: Yes, I have checked the two experiments in this paper, including grid-world and mountain car. Supplementary Material: Yes. I have reviewed the proofs and experimental settings in the supplementary materials. Relation To Broader Scientific Literature: 1.
The paper uncovers that strong duality does not hold in the robust constrained RL setting. 2. The paper proposes a primal-only method to address the limitation and presents detailed proofs. Essential References Not Discussed: Yes, this paper ignores some literature, such as [1,2,3], in the field of constrained RL. [1] Wang Y, Zhan S S, Jiao R, et al. Enforcing hard constraints with soft barriers: Safe reinforcement learning in unknown stochastic environments[C]//International Conference on Machine Learning. PMLR, 2023: 36593-36604. [2] Ding S, Wang J, Du Y, et al. Reduced policy optimization for continuous control with hard constraints[J]. Advances in Neural Information Processing Systems, 2023, 36: 38642-38667. [3] Yang L, Ji J, Dai J, et al. Constrained update projection approach to safe policy optimization[J]. Advances in Neural Information Processing Systems, 2022, 35: 9111-9124. Other Strengths And Weaknesses: Strengths: 1. The theoretical analysis of this paper is complete and detailed. Weaknesses: 1. It seems RRPO does not propose anything particularly novel; the core idea, alternating updates, is similar to CRPO, as is the technique. 2. How does RRPO update its value function? The reviewer thinks that, although it may not be the key component of this paper, the authors should illustrate it clearly in their pseudocode. Other Comments Or Suggestions: see Other Strengths And Weaknesses. Questions For Authors: see Other Strengths And Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s detailed comments and valuable insights. Below, we provide our point-by-point responses. ## Claims And Evidence: * ***Primal-dual methods will fail in robust constrained RL***: We would like to clarify that we never claimed that primal-dual methods will fail in all cases. Instead, we carefully used the phrase "may fail" to emphasize their potential limitations, motivating the need for a non-primal-dual approach. Specifically, we stated: > "... it indicates that primal-dual methods ***may fail*** to find optimal feasible policies in robust constrained settings." (Page 2). * ***Trivial experiments***: Our experiments were carefully designed to highlight scenarios where a non-robust algorithm may violate constraints in the worst case. We also appreciate the reviewer’s suggestion to include experiments in the MuJoCo environment to demonstrate cases where the robust algorithm outperforms the non-robust one under perturbations. We will incorporate this addition in our revision. ## Essential References Not Discussed: We sincerely appreciate the reviewer for bringing these important references to our attention. We will add these missing references to our work. ## Other Strengths And Weaknesses: * ***RRPO is similar to CRPO***: Given that strong duality may not hold in robust constrained RL, it is natural to explore whether existing primal-only algorithms can still be effective. Our contribution is not in proposing an entirely new algorithm but in demonstrating that this classical constrained RL approach can achieve optimal sample complexity without relying on strong duality assumptions, which differs from the existing robust constrained RL literature, which additionally assumes strong duality. * ***Solving the robust value function***: Rather than proposing a specific solution method for the robust value function, we assume it can be computed with the desired accuracy. 
In Appendix C.3, we describe two valid robust policy evaluation approaches. Additionally, a particular update rule for the p-norm $(s,a)$-rectangular set is given in Eq.(14).
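For intuition, the alternating primal-only update discussed in this thread (rectify the constraint when it is violated beyond a threshold, otherwise improve the objective) can be sketched on a toy softmax bandit. This is a minimal illustrative sketch, not the authors' RRPO implementation: the function names, learning rate, tolerance, and the two-armed reward/cost vectors are all assumptions for illustration.

```python
import math

def softmax(theta):
    m = max(theta)
    e = [math.exp(t - m) for t in theta]
    s = sum(e)
    return [x / s for x in e]

def value(theta, v):
    # expected value of v under the softmax policy pi(theta)
    p = softmax(theta)
    return sum(pi * vi for pi, vi in zip(p, v))

def grad(theta, v):
    # exact policy gradient of sum_i pi_i * v_i w.r.t. theta
    p = softmax(theta)
    val = sum(pi * vi for pi, vi in zip(p, v))
    return [pj * (vj - val) for pj, vj in zip(p, v)]

def alternating_update(r, c, budget, eta=0.01, lr=0.2, steps=2000):
    # CRPO-style primal-only loop: if the cost exceeds budget + eta,
    # descend on the constraint; otherwise ascend on the objective.
    # No Lagrange multiplier is ever formed.
    theta = [0.0] * len(r)
    for _ in range(steps):
        if value(theta, c) > budget + eta:
            g = grad(theta, c)
            theta = [t - lr * gi for t, gi in zip(theta, g)]
        else:
            g = grad(theta, r)
            theta = [t + lr * gi for t, gi in zip(theta, g)]
    return theta

# two-armed bandit: arm 0 has high reward but also high cost
theta = alternating_update(r=[1.0, 0.2], c=[1.0, 0.0], budget=0.5)
```

The iterate hovers near the constraint boundary $C(\pi)\approx 0.5$ without ever constructing a dual variable, which is the appeal of a primal-only scheme when strong duality may fail.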
GAPrompt: Geometry-Aware Point Cloud Prompt for 3D Vision Model
Accept (poster)
Summary: The paper focuses on parameter-efficient fine-tuning for 3D vision models and introduces a geometry-aware prompt learning method, named GAPrompt. GAPrompt incorporates three key designs to effectively capture geometric information, including Point Prompt, Point Shift Prompter, and the Prompt Propagation mechanism. Experimental results on the ScanObjectNN and ModelNet40 datasets demonstrate the effectiveness of GAPrompt. Claims And Evidence: This paper emphasizes the importance of geometric information in parameter-efficient fine-tuning of 3D vision models. Experimental results demonstrate the effectiveness of the three proposed geometry-aware designs on benchmark datasets. Methods And Evaluation Criteria: Yes. The three proposed designs focus on capturing the geometric information, and ScanObjectNN and ModelNet40 are standard benchmark datasets to evaluate fine-tuning on 3D vision models. Theoretical Claims: Yes, I have checked the theoretical analysis in Sec. 3.4. Experimental Designs Or Analyses: I have reviewed all the experiments. Ablation studies in Tables 3 and 4 assess the effectiveness of each design. However, the effectiveness of Point Prompt is not explicitly verified in Table 3. Supplementary Material: I have reviewed all the supplementary materials. Relation To Broader Scientific Literature: Prior studies have also acknowledged the importance of 3D-aware parameter-efficient fine-tuning. For example, IDPT'23 generates dynamic prompt tokens for each point cloud instance to capture semantic prior features, while DAPT'24 combines dynamic scale adapters with internal prompts for effective point cloud transfer learning. Essential References Not Discussed: The Positional Prompt Tuning [PPT'22] method is designed for the same task as this work and has released models and code. However, it is neither cited nor compared in the paper. [PPT'22] Positional Prompt Tuning for Efficient 3D Representation Learning. 
Other Strengths And Weaknesses: ### Strengths - GAPrompt introduces a geometry-aware point cloud prompt for parameter-efficient fine-tuning of 3D vision models, incorporating three designs to effectively capture geometric information. - Experimental comparisons with existing methods, along with ablation studies on individual designs, demonstrate the effectiveness of GAPrompt. - The paper is well-written and easy to follow. ### Weaknesses - Insufficient ablation study on Point Prompt in Table 3. - Missing comparisons with [PPT'22] (Positional Prompt Tuning for Efficient 3D Representation Learning). - Lack of few-shot learning experiments on ModelNet40 and part segmentation experiments on ShapeNetPart. Other Comments Or Suggestions: None. Questions For Authors: ### Major - 1. The working mechanism of Point Prompt in capturing subtle geometric features is not clearly explained. In Table 3, results for "GAPrompt without Point Prompt" should be included. Additionally, besides analyzing the effect of different numbers of Point Prompts, visualizations of the trained Point Prompt should be provided for better understanding. - 2. A comparison with [PPT'22] (Positional Prompt Tuning for Efficient 3D Representation Learning) should be added to Table 1. - 3. It is recommended to include few-shot learning experiments on ModelNet40 and part segmentation experiments on ShapeNetPart. ### Minor - 4. In Fig. 4, the results before and after Self-Attention are inconsistent across different prompt injection options. Further clarification is needed. - 5. The number of Point Prompts varies across datasets in Table 5. A justification for this choice is needed. Additionally, details on how $L_p$ is set and how $p_i^{\prime}$ is initialized should be provided. My evaluation is primarily based on the main weaknesses outlined in the previous section, which correspond to the major problems. 
Addressing these concerns with substantial evidence and clarifications could potentially influence my rating. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### Q1. working mechanism of Point Prompt
The working mechanism of Point Prompt can be analyzed through Equations 18 and 19 in the paper. Previous prompting methods operate at the **token level**, which corresponds to local patches, failing to adjust the exact points within patches. In contrast, our Point Prompt directly prompts within patches, operating at **point-level grains**. When adapting to downstream tasks, it is optimized via the task loss to focus on critical points that encode subtle yet essential geometric information. As for visualization, the learned Point Prompts are already present in Figures 5 and 6. However, as there are only **20 prompts** within **2048 points**, they may be difficult to see. In the final version, we will explicitly highlight the learned Point Prompts in the visualization for better clarity.
### W1. ablation study on Point Prompt in Table 3
We supplement additional ablation results on "GAPrompt without Point Prompt" as shown below. Incorporating the learnable Point Prompt yields a **1.02%** gain in accuracy.
|Point Prompt|Point Shift Prompter|Prompt Propagation|Acc.(%)|
|:----------:|:------------------:|:----------------:|:------:|
|-|-|-|86.10|
|√|-|-|87.85|
|√|√|-|89.34|
|-|√|√|89.65|
|√|√|√|90.67|
### Q2,W2. comparison with PPT
Thanks for your advice; we will add it to the main table in the final version. The primary reason we did not initially include this comparison is that PPT has not been officially published and is only available on **arXiv**. Even so, we compare on 4 different representative backbones across 4 datasets, and our GAPrompt achieves the higher performance in **14** of the 16 experiments with fewer trainable parameters (**0.6M**), owing to our lightweight geometry-aware prompting design. 
|Method|Ref.|Param.|OBJ_BG|OBJ_ONLY|PB_T50_RS|ModelNet|
|:-------:|:--------:|:-----:|:-----------:|:-------:|:-------:|:------:|
||||*Point-MAE*||||
|+PPT|arXiv|1.1|89.33|88.81|84.87|93.7|
|+GAPrompt|ThisPaper|**0.6**|**91.91**|**90.19**|**85.57**|**94.2**|
||||*ReCon*||||
|+PPT|arXiv|1.1|**95.01**|**93.28**|89.52|93.8|
|+GAPrompt|ThisPaper|**0.6**|94.49|92.60|**89.76**|**94.0**|
||||*PointGPT-L*||||
|+PPT|arXiv|3.6|98.28|96.21|94.10|95.1|
|+GAPrompt|ThisPaper|**2.0**|**98.97**|**96.73**|**94.31**|**96.2**|
||||*Point-FEMAE*||||
|+PPT|arXiv|1.1|93.98|92.08|88.79|93.3|
|+GAPrompt|ThisPaper|**0.6**|**95.53**|**93.63**|**90.67**|**94.5**|
### W3,Q3. few-shot and part segmentation experiments
We supplement few-shot experiments on ModelNet40 and segmentation experiments on ShapeNetPart below. We compare our method with other SOTA works based on pre-trained Point-FEMAE. It can be seen that our method performs well in the few-shot setting and generalizes well to segmentation tasks, verifying the efficacy of GAPrompt.
*Few-shot on ModelNet40*
||Ref.|5-way||10-way||
|:---------:|:--------:|:----------:|:----------:|:----------:|:----------:|
|||10-shot|20-shot|10-shot|20-shot|
|+DAPT|CVPR24|96.6±2.1|97.9±2.7|92.1±3.4|94.9±3.3|
|+Point-PEFT|AAAI24|96.8±2.6|98.1±2.5|92.4±3.2|95.0±3.1|
|+PPT|arXiv|96.9±1.9|98.0±2.9|91.9±3.6|95.2±3.0|
|+GAPrompt|ThisPaper|**97.2**±1.7|**98.4**±2.1|**92.7**±2.9|**95.7**±2.8|
Results on *ShapeNetPart*
|Method|Ref.|Param.|Cls. mIoU|Ins. mIoU|
|:-------:|:--------:|:---------:|:-------:|:-------:|
|||*Point-MAE*|||
|+DAPT|CVPR24|5.65|84.01|85.7|
|+PPT|arXiv|5.62|84.07|85.7|
|+GAPrompt|ThisPaper|**5.55**|**84.10**|**85.8**|
|||*ReCon*|||
|+DAPT|CVPR24|5.65|83.87|85.7|
|+PPT|arXiv|5.62|**84.23**|85.6|
|+GAPrompt|ThisPaper|**5.55**|83.90|**85.8**|
### Q4. difference before and after Self-Attn in Fig. 4
The difference before and after Self-Attention **arises from distinct inference orders**. 
This is similar to choosing between **"Attn→FFN"** or **"FFN→Attn"** in Transformer models. The Prompt Propagation mechanism enhances spatial information, further facilitating interactions between prompts and point tokens. When applied **after Self-Attention**, it provides stronger performance improvements. ### Q5a. determination of Point Prompt number The number of Point Prompts is set **proportionally** to the input point cloud resolution, typically **0.5%-1%** of the dataset resolution. For **clean datasets** (e.g., ModelNet40, OBJ_ONLY), **0.5%** yields slightly better results (+0.1%), corresponding to 5 and 10 prompts for 1024 and 2048 points, respectively. While for **noisier datasets** (e.g., OBJ_BG, PB_RS_T50), **1%** provides a small gain (+0.2%), resulting in 20 prompts for 2048 points. ### Q5b. setting of $L_p$ and $p_i^{\prime}$ $L_p$ is set to **10**, as shown in the ablation study below. When $L_p$ is too small, the prompting effect is insufficient, leading to suboptimal performance. Conversely, an excessive $L_p$ does not yield further improvements while incurring additional parameter overhead. As for $p_i^{\prime}$, we use Kaiming Uniform initialization. |$L_p$|0|3|6|10|15|20| |-----|:---:|:---:|:---:|:-------:|:---:|:---:| |Acc.|89.34|89.98|90.34|**90.67**|90.55|90.47| --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their responses, which I have read carefully. In their responses, they added comparisons with PPT. However, some PPT results are inconsistent with those in their paper for the classification task with PointMAE and the few-shot setting with RECON, yet no explanation has been provided. This inconsistency is quite confusing and raises concerns for me, leading me to lean toward a borderline rejection of the paper. --- Reply to Comment 1.1.1: Comment: We sincerely thank Reviewer **7QpT** for the additional feedback. 
We are sorry to see a downgrade from *weak accept* to *weak reject*, and we would like to offer further clarification regarding the two results in question. **Regarding the classification task of PPT on Point-MAE:** The discrepancy arises from the use of **different data augmentation strategies**. Specifically, the PPT [arXiv] paper employs **rotation augmentation**, which is relatively strong. In contrast, our paper and rebuttal adopt the same **scale and translation augmentation** used by Point-MAE and other PEFT methods to ensure fair and consistent comparisons across all baselines. To directly address your concern, we re-evaluated PPT under the **same augmentation setting (scale and translate)** using the official codebase, and reported the updated results in our rebuttal. We would also like to note that PPT is currently an **unpublished preprint on arXiv**, and thus, we were **not obliged to include it as a baseline** in our main paper. Nevertheless, we still conducted additional comparisons in good faith, aiming to meet the expectations raised in your review. We appreciate your attention to experimental rigor. **Regarding the few-shot task:** The confusion here stems from the use of **different backbones**. As stated in our rebuttal, the few-shot results we reported are based on the more recent and stronger backbone **Point-FEMAE [AAAI 2024]**, **not ReCon [ICML 2023]**. This was clarified in our response as: "*We compare our method with other SOTA works based on pre-trained Point-FEMAE.*" We hope these clarifications have addressed your concerns. Please kindly let us know if there are any remaining issues or if further clarification is needed. We value your feedback and are committed to improving the clarity and reproducibility of our work.
Summary: This is a paper on efficient point cloud fine-tuning, in which the authors add a geometric-perception structure to the point cloud embedding part for efficient fine-tuning and achieve good results. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes, but this paper does not contain complex proofs. Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: Previous point cloud PEFT methods, such as DAPT, IDPT, and PPT, mainly rely on prompt learning and adapters for feature extraction. Essential References Not Discussed: Some papers have not been compared and cited. The authors should add these discussions, which will not affect the contribution of this paper. Parameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning Positional Prompt Tuning for Efficient 3D Representation Learning Other Strengths And Weaknesses: This work is solid and worth accepting; it has many ablation studies and visualizations. Other Comments Or Suggestions: None Questions For Authors: 1. How do the inference time and FLOPs compare to those of other works? 2. How many Enhanced Prompt Tokens are concatenated to the original token sequence? Is there any ablation study? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your appreciation and valuable advice!
### Q1. inference time and FLOPs comparison
We test the inference time and FLOPs of our method and other SOTA methods, including IDPT[ICCV23], DAPT[CVPR24] and Point-PEFT[AAAI24]. All experiments are conducted on an RTX 4090. As the table below shows, our method attains the fastest inference time, with only **11.1** milliseconds per sample, and the fewest FLOPs, with **5.0G** MACs, among parameter-efficient works. Notably, our method introduces merely **4%** computational overhead while bringing an over **97%** reduction in trainable parameters.
| | *Point-MAE* | +IDPT | +DAPT | +Point-PEFT | +GAPrompt |
| :------------------------: | :---------: | :---: | :---: | :---------: | :-------: |
| inference time (ms/sample) | 10.2 | 16.2 | 12.0 | 13.5 | **11.1** |
| FLOPs (G) | 4.8 | 7.2 | 5.0 | 7.0 | **5.0** |
### Q2. number of the enhanced prompt tokens
The number of enhanced prompt tokens in our experiments is set to **10** as a hyperparameter. We provide additional ablation results below and will add them to the final version for better interpretability. We find that our method produces peak performance when adopting 10 enhanced prompt tokens. A smaller number leads to suboptimal prompting effects due to insufficient guidance, while an excessive number does not yield further improvements but incurs additional computational cost.
| Prompt Token Number | 0 | 3 | 6 | 10 | 15 | 20 | 30 |
| ------------------- | :---: | :---: | :---: | :-------: | :---: | :---: | :---: |
| Acc. on PB_T50_RS | 89.34 | 89.98 | 90.34 | **90.67** | 90.55 | 90.47 | 90.33 |
### C1. citation to PointGST and PPT
Thanks for your reminder; we will cite them in the final version. Notably, at the time of our submission and even at present, both works remain preprints on arXiv and have not been officially published. However, our method still excels both in terms of accuracy and efficiency. 
We compare with them on four datasets based on four representative backbones, as shown below. GAPrompt achieves **12** SOTA results, while PPT and PointGST each get 2. Moreover, our method has only **0.6M** trainable parameters, attaining the highest parameter efficiency, attributed to our geometry-aware point-level prompting design.
| Method | Ref. | Param. | OBJ_BG | OBJ_ONLY | PB_T50_RS | ModelNet |
| :-------: | :--------: | :-----: | :-----------: | :-------: | :-------: | :------: |
| | | | *Point-MAE* | | | |
| +PPT | arXiv | 1.1 | 89.33 | 88.81 | 84.87 | 93.7 |
| +PointGST | arXiv | 0.6 | 91.74 | 90.19 | 85.29 | 93.5 |
| +GAPrompt | This Paper | **0.6** | **91.91** | **90.19** | **85.57** | **94.2** |
| | | | *ReCon* | | | |
| +PPT | arXiv | 1.1 | **95.01** | **93.28** | 89.52 | 93.8 |
| +PointGST | arXiv | 0.6 | 94.49 | 92.94 | 89.49 | 93.6 |
| +GAPrompt | This Paper | **0.6** | 94.49 | 92.60 | **89.76** | **94.0** |
| | | | *PointGPT-L* | | | |
| +PPT | arXiv | 3.6 | 98.28 | 96.21 | 94.10 | 95.1 |
| +PointGST | arXiv | 2.4 | 98.97 | **97.59** | **94.83** | 94.8 |
| +GAPrompt | This Paper | **2.0** | **98.97** | 96.73 | 94.31 | **96.2** |
| | | | *Point-FEMAE* | | | |
| +PPT | arXiv | 1.1 | 93.98 | 92.08 | 88.79 | 93.3 |
| +PointGST | arXiv | 0.6 | 94.66 | 92.94 | 90.22 | 93.8 |
| +GAPrompt | This Paper | **0.6** | **95.53** | **93.63** | **90.67** | **94.5** |
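To make the prompt-token mechanics in Q2 concrete: enhanced prompt tokens are extra learnable vectors concatenated to the patch-token sequence before the transformer blocks, so sequence length grows only by the prompt count. The sketch below is a hypothetical shape-level illustration; the 384-dimensional feature size and the count of 10 follow the rebuttal, while the function name and initialization scale are assumptions, not the actual GAPrompt code.

```python
import numpy as np

def concat_prompt_tokens(tokens, prompt_tokens):
    # prepend learnable prompt tokens to the point-token sequence,
    # so self-attention can mix prompt and patch information
    return np.concatenate([prompt_tokens, tokens], axis=0)

rng = np.random.default_rng(0)
dim = 384                                          # transformer feature size
point_tokens = rng.normal(size=(64, dim))          # tokens from 64 local patches
prompt_tokens = 0.02 * rng.normal(size=(10, dim))  # 10 learnable prompt tokens

seq = concat_prompt_tokens(point_tokens, prompt_tokens)
print(seq.shape)  # (74, 384)
```

Only the prompt tokens (and any lightweight prompter) would be trainable during fine-tuning; the backbone weights stay frozen.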
Summary: This paper proposes GAPrompt, a geometry-aware prompt learning method for parameter-efficient fine-tuning (PEFT) of pre-trained 3D vision models. Existing PEFT approaches in 3D vision struggle to capture geometric information from sparse and irregular point clouds. To address this, GAPrompt introduces three key innovations: Point Prompt: Explicitly incorporates learnable point clouds as auxiliary input to enhance geometric awareness. Point Shift Prompter: Dynamically adjusts point cloud positions using global shape features extracted via hierarchical downsampling. Prompt Propagation: Integrates shape information into feature extraction through token replacement and interpolation. Experiments on ScanObjectNN and ModelNet40 show GAPrompt achieves competitive performance with full fine-tuning (e.g., 90.67% vs. 90.22% on PB-T50-RS) while using only 2.19% trainable parameters. Claims And Evidence: I think they are clear. Methods And Evaluation Criteria: They are reasonable for me. Theoretical Claims: I have checked. But I’m not familiar with this topic at all. So I will read the comments from other reviewers carefully. Experimental Designs Or Analyses: Yes, I have checked. They are sound to me. Supplementary Material: Yes, I have reviewed. Relation To Broader Scientific Literature: The key contributions of GAPrompt build upon and extend several critical strands of research in 3D vision, parameter-efficient fine-tuning (PEFT), and geometric deep learning. Essential References Not Discussed: Sorry. I'm not familiar with this topic. Other Strengths And Weaknesses: Strengths: Geometry Integration: First work to explicitly leverage point-level geometric cues for 3D PEFT, addressing a critical gap. Parameter Efficiency: Achieves SOTA performance with <3% tunable parameters, outperforming adapter-based methods. Interpretability: Visualizations validate the role of shape features in guiding attention. 
Weaknesses: Computational Overhead: Multi-resolution FPS/KNN operations may introduce latency (unquantified in the paper). Task Specificity: Evaluated only on classification; generalization to segmentation/detection remains unproven. Initialization Sensitivity: Uniform point prompt initialization may underperform on non-uniform LiDAR data. Other Comments Or Suggestions: Method Enhancements: Explore sparse point cloud optimizations to reduce FPS/KNN costs. Investigate adaptive weighting between shape features and prompts. Theoretical Analysis: Formalize the robustness of point shifts against adversarial perturbations. Compare initialization strategies (e.g., clustered vs. uniform prompts). Questions For Authors: Computational Cost: Does the Point Shift Prompter’s multi-resolution grouping become a bottleneck for real-time applications (e.g., robotics)? Pretraining Dependency: How does GAPrompt perform with contrastive pre-trained models (e.g., Point-BERT) versus mask-based ones (Point-MAE)? Density Variations: Can the current design handle extremely sparse inputs (e.g., 100 points) without performance degradation? Latency Metrics: What is the actual inference time increase compared to full fine-tuning (e.g., milliseconds per sample)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### Q1,W1. computational cost of multi-resolution grouping
Multi-resolution grouping in the Point Shift Prompter is unlikely to become a bottleneck for real-time applications. Although the multi-resolution FPS/KNN operation has $O(N^2)$ complexity, its overhead remains minimal compared to the $O(N^2)$ attention mechanism and expensive MLPs, because the feature size of FPS/KNN is **3**, far less than the **384**-dimensional model features. We supplement a breakdown of computational costs. As seen, our three modules collectively account for **less than 2%** of the total computational cost, with the majority stemming from the encoder and attention mechanism.
||Point Prompt|Point Shift Prompter|Prompt Propagation|Encoder|Attn. Layers|FFN layers|Downstream head|
|---------|:----------:|:------------------:|:----------------:|:-----:|:----------:|:--------:|:-------------:|
|**FLOPs**|0.0005G|0.045G|0.20G|2.03G|0.082G×12|0.164G×12|0.001G|
|**Ratio**|0.01%|0.9%|0.4%|39.9%|19.3%|39.5%|0.01%|
### Q2. pretraining dependency
In addition to the backbones already introduced (mask-based Point-MAE and **contrastive** pre-trained **ReCon**), we supplement additional results on **Point-BERT**. Notably, in Table 1 of the paper, **ReCon** is precisely pre-trained via contrastive learning, and our method **surpasses full fine-tuning** with only **1.38%** trainable parameters. Based on Point-BERT, GAPrompt still attains the highest performance with the fewest trainable parameters, verifying its generalizability across pre-trained backbones.
|Method|Ref.|Param.|OBJ_BG|OBJ_ONLY|PB_T50_RS|ModelNet|
|:-------:|:--------:|:-----:|:----------:|:-------:|:-------:|:------:|
||||*Point-BERT*||||
|+IDPT|ICCV23|1.7|88.12|88.30|83.69|92.6|
|+DAPT|CVPR24|1.1|91.05|89.67|85.43|93.1|
|+GAPrompt|ThisPaper|**0.6**|**91.22**|**89.85**|**85.64**|**93.5**|
### Q3. density variations
We supplement experiments under extremely sparse input conditions, at a 128-point input resolution on ModelNet40. 
While all methods face a performance drop, GAPrompt **consistently outperforms** other PEFT methods, demonstrating robustness to varying input densities.
||Resolution|*Point-MAE*|+IDPT|+DAPT|+Point-PEFT|+GAPrompt|
|:--------:|:--------:|:---------:|:---:|:---:|:---------:|:-------:|
|ModelNet40|128 points|86.2%|84.4%|85.2%|85.6%|**86.0%**|
### Q4. latency analysis
We measure inference time on ScanObjectNN using an RTX 4090, as shown below. GAPrompt incurs only **0.9ms** additional latency, accounting for less than **9%** of the base 10.2ms required by Point-MAE. Furthermore, our inference time is lower than that of other PEFT methods, benefiting from the lightweight point-level prompting design.
||*Point-MAE*|+IDPT|+DAPT|+Point-PEFT|+GAPrompt|
|:------------------------:|:---------:|:---:|:---:|:---------:|:-------:|
|inference time (ms/sample)|10.2|16.2|12.0|13.5|**11.1**|
### W2. task specificity
We supplement additional results on the part segmentation dataset *ShapeNetPart* and the semantic segmentation dataset *S3DIS*. Even though such dense prediction tasks are challenging, our method surpasses previous SOTA methods, including DAPT [CVPR24] and Point-PEFT [AAAI24], in both efficiency and efficacy.
Results on *ShapeNetPart*
|Method|Ref.|Param.|Cls.mIoU|Ins.mIoU|
|:---------:|:---------:|:------:|:-------:|:-------:|
||*Point-MAE*||||
|+DAPT|CVPR24|5.65|84.01|85.7|
|+Point-PEFT|AAAI24|5.62|83.41|85.4|
|+GAPrompt|ThisPaper|**5.55**|**84.10**|**85.8**|
||*ReCon*||||
|+DAPT|CVPR24|5.65|83.87|85.7|
|+Point-PEFT|AAAI24|5.62|83.23|85.3|
|+GAPrompt|ThisPaper|**5.55**|**83.90**|**85.8**|
Results on *S3DIS*
|Method|Ref.|Param.|mAcc|mIoU|
|:---------:|:---------:|:------:|:------:|:------:|
||*Point-MAE*||||
|+DAPT|CVPR24|5.61|67.2|56.2|
|+Point-PEFT|AAAI24|5.58|66.5|56.0|
|+GAPrompt|ThisPaper|**5.51**|**68.5**|**58.4**|
||*ReCon*||||
|+DAPT|CVPR24|5.61|66.3|56.3|
|+Point-PEFT|AAAI24|5.58|65.8|55.8|
|+GAPrompt|ThisPaper|**5.51**|**68.0**|**58.0**|
### W3. initialization sensitivity
Our point prompt initialization exhibits robustness on non-uniform LiDAR data. In our Table 1 experiments, the **three ScanObjectNN variants** originate from real-world **LiDAR scans**, which inherently contain **non-uniform distributions** and **background fragments**. Despite this, GAPrompt still achieves SOTA results, even surpassing full fine-tuning, using less than **2%** trainable parameters.
### S1. potential method enhancements
Thanks for your insightful suggestions! As classic algorithms with $O(N^2)$ complexity, FPS/KNN could be replaced with sparse point cloud operations to further enhance efficiency. As for adaptive weighting between shape features and prompts, the current hyperparameter setting relies on human expertise. We agree that adaptive weighting would be a better choice, deserving future exploration.
### S2. theoretical analysis advice
Thanks for your valuable advice. We will provide additional discussions on the robustness of point shifts against adversarial perturbations and on Point Prompt initialization strategies in the final version.
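Schematically, the point-level prompting defended in this thread operates on raw coordinates rather than on tokens: the Point Shift Prompter adjusts input coordinates, and the learnable Point Prompt appends a few extra points. The following is a minimal numpy sketch under assumed shapes (1024 input points, 20 prompt points, matching the numbers in the rebuttal); the helper name and the random stand-in for the predicted shifts are illustrative, not the actual implementation.

```python
import numpy as np

def apply_point_level_prompt(points, prompt_points, shifts):
    # point-level prompting: shift the raw coordinates, then append
    # the learnable prompt points to the cloud
    return np.concatenate([points + shifts, prompt_points], axis=0)

rng = np.random.default_rng(0)
cloud = rng.uniform(-1.0, 1.0, size=(1024, 3))        # input point cloud
prompt_points = rng.uniform(-1.0, 1.0, size=(20, 3))  # 20 learnable prompt points
shifts = 0.01 * rng.normal(size=cloud.shape)          # stand-in for predicted shifts

prompted = apply_point_level_prompt(cloud, prompt_points, shifts)
print(prompted.shape)  # (1044, 3): 1024 + 20 points
```

In training, `prompt_points` (and the shift predictor) would be the optimized parameters, while the downstream backbone consumes the prompted cloud unchanged.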
Summary: The authors propose a parameter-efficient fine-tuning method for point cloud models, called GAPrompt. The main motivation of this method is to inject geometric information into the point cloud model. To achieve this goal, the authors propose three components: Point Prompt, which is used to increase the number of points; Point Shift Prompter, which is used to extract global features; and the Prompt Propagation mechanism, which injects global information into different blocks of the model. The effectiveness of this method is verified on the point cloud classification task. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: This article lacks some theoretical innovations. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. I focused on the experimental data in Figure 7. Relation To Broader Scientific Literature: The author mentioned the most advanced work in the introduction, related work, and experimental comparison. However, the author did not mention some of the most advanced work, such as PPT [1] and PointGST [2]. [1] Parameter-efficient Prompt Learning for 3D Point Cloud Understanding [2] Parameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning Essential References Not Discussed: [1] Parameter-efficient Prompt Learning for 3D Point Cloud Understanding [2] Parameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning Other Strengths And Weaknesses: Strengths: 1. The author proposes GAPrompt, a novel geometry-aware prompt learning method tailored for pre-trained 3D vision models. 2. The author introduces three key algorithm designs, including Point Prompt, Point Shift Prompter, and the Prompt Propagation mechanism. Weaknesses: 1. Although the proposed method has achieved advanced results, I think its innovation lies in combining previously published ideas. Specifically, PPT [1] has already proposed fine-tuning the 3D model using the position information of points. 
At the same time, the method proposed in this paper is very similar to PointGST [2], yet on some benchmarks it is still lower than PointGST. [1] Parameter-efficient Prompt Learning for 3D Point Cloud Understanding [2] Parameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning 2. The experiments, conducted only on the point cloud classification task, are insufficient; the method needs to be further verified on point cloud part segmentation or semantic segmentation tasks. 3. I think that the prompt increases the number of input points, which is a disguised way of increasing the size of the dataset, similar to a kind of data augmentation. I think this is unfair to other methods. 4. In Figure 7, the author should show the case where the point prompt number is zero. Based on Figure 7, I guess the corresponding result should be lower than 89.76%. This is not significantly different from other methods. Does this mean that the gain of the proposed method is mainly caused by increasing the number of point cloud inputs? Other Comments Or Suggestions: The text lacks some explanations of data symbols, especially in Sections 3.2 and 3.3. Questions For Authors: See Weaknesses. ----------------------------------------------------------------- After reading the author's reply carefully and thinking it over, I still think that the manuscript lacks innovation, the experimental evaluation is narrow, and the writing is confusing. I think the manuscript fails to meet the publication requirements of the conference. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: ### R1. comparison with PPT and PointGST Although PPT and PointGST are currently available only as preprints on **arXiv** and have not been officially published, we supplement comparison experiments as shown below. The results across four datasets and four representative backbones illustrate that **GAPrompt achieves 12 SOTA results** while PPT and PointGST each attain 2. Furthermore, our approach requires only **0.6M** trainable parameters, the highest parameter efficiency. |Method|Ref.|Param.|OBJ_BG|OBJ_ONLY|PB_T50_RS|ModelNet| |:-------:|:--------:|:-----:|:-----------:|:-------:|:-------:|:------:| ||||*Point-MAE*|||| |+PPT|arXiv|1.1|89.33|88.81|84.87|93.7| |+PointGST|arXiv|0.6|91.74|90.19|85.29|93.5| |+GAPrompt|ThisPaper|**0.6**|**91.91**|**90.19**|**85.57**|**94.2**| ||||*ReCon*|||| |+PPT|arXiv|1.1|**95.01**|**93.28**|89.52|93.8| |+PointGST|arXiv|0.6|94.49|92.94|89.49|93.6| |+GAPrompt|ThisPaper|**0.6**|94.49|92.60|**89.76**|**94.0**| ||||*PointGPT-L*|||| |+PPT|arXiv|3.6|98.28|96.21|94.10|95.1| |+PointGST|arXiv|2.4|98.97|**97.59**|**94.83**|94.8| |+GAPrompt|ThisPaper|**2.0**|**98.97**|96.73|94.31|**96.2**| ||||*Point-FEMAE*|||| |+PPT|arXiv|1.1|93.98|92.08|88.79|93.3| |+PointGST|arXiv|0.6|94.66|92.94|90.22|93.8| |+GAPrompt|ThisPaper|**0.6**|**95.53**|**93.63**|**90.67**|**94.5**| ### W1. innovation and difference against PPT and PointGST Our GAPrompt differs from these methods in two key aspects. 1. **Finer-grained prompting:** PPT operates on **position encodings** of point tokens, as a **coarse-grained** token-level prompting approach. In contrast, GAPrompt introduces **fine-grained point-level** prompting via the Point Prompt and Point Shift Prompter, allowing more precise and adaptive feature modulation at the individual point level. 2. **Interpretability and geometric awareness:** PointGST introduces a **Spectral Adapter**, transforming point tokens from spatial domain to spectral domain, which belongs to **adapter** methods. 
But GAPrompt belongs to prompt methods, avoiding **obscure spectral domain transformations** and ensuring stronger **interpretability and geometric awareness**.

### W2. part or semantic segmentation tasks
We supplement additional segmentation results on both the ShapeNetPart and S3DIS datasets. Our GAPrompt outperforms other methods, including the two arXiv preprints. We achieve **six** SOTA metrics across 2 datasets and 2 backbones with the highest parameter efficiency, adding only **0.3M** parameters beyond the 5.2M of the downstream head.

Results on *ShapeNetPart*

|Method|Ref.|Param.|Cls.mIoU|Ins.mIoU|
|:-------:|:---------:|:------:|:-------:|:-------:|
||*Point-MAE*||||
|+DAPT|CVPR24|5.65|84.01|85.7|
|+PPT|arXiv|5.62|84.07|85.7|
|+PointGST|arXiv|5.59|83.81|85.8|
|+GAPrompt|ThisPaper|**5.55**|**84.10**|**85.8**|
||*ReCon*||||
|+DAPT|CVPR24|5.65|83.87|85.7|
|+PPT|arXiv|5.62|**84.23**|85.6|
|+PointGST|arXiv|5.59|83.98|85.8|
|+GAPrompt|ThisPaper|**5.55**|83.90|**85.8**|

Results on *S3DIS*

|Method|Ref.|Param.|mAcc|mIoU|
|:-------:|:---------:|:------:|:------:|:------:|
||*Point-MAE*||||
|+DAPT|CVPR24|5.61|67.2|56.2|
|+PPT|arXiv|5.58|67.6|57.9|
|+PointGST|arXiv|5.59|68.4|**58.6**|
|+GAPrompt|ThisPaper|**5.51**|**68.5**|58.4|
||*ReCon*||||
|+DAPT|CVPR24|5.61|66.3|56.3|
|+PPT|arXiv|5.58|67.4|57.3|
|+PointGST|arXiv|5.59|67.8|57.9|
|+GAPrompt|ThisPaper|**5.51**|**68.0**|**58.0**|

### W3. a disguised way of increasing dataset size, data augment, unfair
Our Point Prompt is not data augmentation, and the setting is fair, because the inputs are the same for all methods. The reasons are threefold: First, the Point Prompts are **randomly initialized**, rather than additional points sampled from each instance, so there is no training or testing data leakage. Second, Point Prompts are **fixed after training**, adapting to a specific domain and intensifying discriminative information, not varying per instance.
Finally, compared to the 1024-point input, the 20 prompts account for less than **2%**. We provide results on ModelNet40 at 1044-point resolution; results are unchanged from 1024 points.

|points|Point-FEMAE|+IDPT|+DAPT|+Point-PEFT|+GAPrompt|
|:-----:|:---------:|:---:|:---:|:---------:|:-------:|
|1024|94.0|93.4|93.2|94.3|94.5|
|1024+20|94.0|93.4|93.2|94.3|94.5|

### W4. point prompt number as zero in Figure 7
We add a further ablation on the point prompt number $P$ below. When $P$ is zero, it is equivalent to dropping the Point Prompt module. However, we still attain **89.65%** with only the other two modules, surpassing DAPT's 88.51% and Point-PEFT's 89.35%. This is attributed to the point-level adaptation of the Point Shift Prompter and the prompt enhancement of Prompt Propagation, while a further gain is achieved by incorporating the Point Prompt.

|$P$|0|5|10|20|30|
|:------:|:---:|:---:|:---:|:---:|:---:|
|Acc.(%)|89.65|89.76|90.41|90.67|90.24|

### S1. data symbols
Thanks for the advice; we will detail the symbols in Sec. 3.2 and 3.3 and check the other symbols as well.

---

Rebuttal Comment 1.1: Comment: Thank you for the author's response. I have carefully reviewed your reply. As you mentioned, GAPrompt is a "fine-grained" prompt, but what does "fine-grained" mean? Is adding some moving points considered fine-grained? Is making the input tokens denser considered fine-grained? In my view, the Point Shift Prompter simply performs point-level operations on points, and the actual difference from other methods is merely in technical details. As I mentioned before, I still believe this method lacks theoretical or essential innovation, and I consider this work to be incremental. Furthermore, regarding my third question, the authors claimed they only added 2% more points, which is 1024+20 points. However, in Figure 3, the tokens include both raw prompt tokens and new prompt tokens.
I believe this goes beyond merely using 1044 points; the authors have approximately doubled the number of tokens, which clearly increases model complexity and time consumption during training. For my second question, the authors added experiments on part segmentation and scene segmentation during the rebuttal period, but I found that their performance improvements do not demonstrate significant advantages, and are even weaker than state-of-the-art methods on some metrics. Additionally, when the authors initially submitted their manuscript, they only conducted experiments on ScanObjectNN and ModelNet40 in the main text, which is far from sufficient. This makes me question whether the authors had adequate time to prepare this manuscript. Regarding my fourth question mentioned earlier, when reading this manuscript, I found the methods section difficult to follow because symbols in Sections 3.2 and 3.3 are mixed together without explanations of these symbols. Also, the organization of the methodology section is confusing. In Section 3.1, Equation 2 references the Point Shift Prompter, but the Point Shift Prompter is described in Section 3.2. Why not describe the Point Shift Prompter first, and then describe the Point Prompt? Based on the above, I believe this manuscript lacks innovation, has limited experimental evidence, and suffers from confusing writing. It does not meet the publication requirements of this conference. I am adjusting my score to reject. --- Reply to Comment 1.1.1: Comment: We sincerely thank Reviewer **4ih6** for the additional feedback. However, we respectfully disagree with several of your assessments and would like to clarify the misunderstandings. **1. On the novelty of fine-grained prompting** You questioned the definition of "fine-grained" and suggested that operating at the point level is not fundamentally different from prior approaches.
We respectfully clarify that our *fine-grained* prompting refers to **injecting learnable prompts at the point-level**, in contrast to *coarse-grained token-level* prompting as used in existing works like IDPT [ICCV'23], DAPT [CVPR'24], and concurrent preprints such as PPT and PointGST. This is not a minor implementation detail — it represents a shift in *how and where* prompts interact with the 3D data, enabling **explicit geometry-aware conditioning.** This conceptual distinction is acknowledged by all other reviewers (**WVi3**, **LHwj**, **7QpT**), none of whom considered our work incremental. We also note that in your initial review, you **highlighted as Strength 1**: *"The author proposes GAPrompt, a novel geometry-aware prompt learning method tailored for pre-trained 3D vision models."* We appreciate this recognition and are **surprised by the subsequent reversal** in your updated review. **2. On the number of tokens and computational cost** You stated that our method doubles the number of tokens, which is **factually incorrect**. As clearly detailed in both our main paper and our response to Reviewer LHwj Q2, we only add **10 prompt tokens** to the **128 input tokens** in the transformer — a less than 8% increase. Figure 3 is a **conceptual illustration**, not a quantitative depiction. Inferring token counts from a schematic rather than the explicitly stated numbers in the paper and response is **speculative and misleading**. The actual token counts and associated computational costs are precisely reported in **Table 1**, where GAPrompt incurs only **0.2 GFLOPs** of overhead (5.0G vs. 4.8G), achieving the **highest efficiency** among prompting baselines. You also referred to the addition of **20 learnable points** as data augmentation in your initial comment. We respectfully disagree. 
These points are *learned parameters*, not randomly sampled or augmented data; they are optimized end-to-end to encode task-relevant geometric priors, fundamentally differing from augmentation strategies. **3. On experimental sufficiency and performance** You claimed that the performance gains are not significant and suggested that our experiments were insufficient. However, GAPrompt outperforms existing methods across **4 datasets** and **4 backbones**, achieving **12 state-of-the-art results**, while the strongest concurrent preprints (PPT and PointGST) achieve only 2 each. Though preprints are unofficial, we still included comprehensive comparisons in the rebuttal per your suggestion. We also added **part segmentation and scene segmentation experiments** during the rebuttal phase. GAPrompt achieves **6 additional SOTA metrics** with only **0.3M** additional parameters (relative to the 5.2M of the downstream model), highlighting both generalization and parameter efficiency. Both **Reviewer LHwj** and **Reviewer 7QpT** considered our experimental validation **solid and comprehensive**, which we believe reflects the strength and rigor of our empirical results. **4. On writing clarity and organization** You noted that the methods section was hard to follow due to unexplained symbols, but **did not** point to any specific symbol or notation, nor provide concrete examples. We welcome detailed suggestions, but **a vague comment without specifics** is difficult to address. As for the structure of Sections 3.1 and 3.2, our organization follows a standard top-down design. Section 3.1 introduces the overall pipeline, while Section 3.2 delves into the Point Shift Prompter in more detail. This structure is conventional in ML papers and was positively received by **Reviewers 7QpT** and **LHwj**, both of whom explicitly praised the **clarity and readability** of the paper.
In summary, while we appreciate your efforts in reviewing our work, we respectfully believe that your revised evaluation does not align with the evidence presented in the paper and rebuttal. We hope this response helps clarify the novelty, efficiency, and completeness of our method.
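To make the Point Prompt mechanism debated in this thread concrete: 20 randomly initialized learnable points are appended to the 1024-point input cloud, then fixed after training. The sketch below is an illustrative reading of the rebuttal, not the authors' released code; the function names (`init_point_prompts`, `prepend_prompts`) are hypothetical.

```python
# Minimal sketch of the Point Prompt idea described above (an assumption,
# not the authors' implementation): a small set of learnable 3D points is
# appended to the input cloud, so 1024 points become 1024 + P points.
import random

def init_point_prompts(num_prompts=20, seed=0):
    """Randomly initialize prompt points; in training these would be
    learnable parameters, fixed afterwards (per W3 in the rebuttal)."""
    rng = random.Random(seed)
    return [(rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
            for _ in range(num_prompts)]

def prepend_prompts(points, prompts):
    """Concatenate prompt points with the raw input cloud."""
    return list(prompts) + list(points)

cloud = [(0.0, 0.0, 0.0)] * 1024          # stand-in for a real point cloud
prompts = init_point_prompts(num_prompts=20)
augmented = prepend_prompts(cloud, prompts)
assert len(augmented) == 1044             # 1024 + 20, i.e. < 2% extra points
```

This illustrates why the overhead is small: the prompt points are a fixed, instance-independent set, not extra samples drawn from each object.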
GraphGPT: Generative Pre-trained Graph Eulerian Transformer
Accept (poster)
Summary: The paper introduces a self-supervised generative pre-trained model called GraphGPT, based on a transformer architecture called the Graph Eulerian Transformer, which employs a graph-to-sequence method based on Eulerian paths. This method ensures reversibility in the graph-to-sequence translation. The transformer is first pre-trained on two kinds of self-supervised tasks (next-token prediction and scheduled masked-token prediction) and then fine-tuned on downstream graph-, edge-, and node-level tasks. The paper claims that the proposed method outperforms SOTA on Open Graph Benchmark datasets. Claims And Evidence: - “Randomly sample one valid path from possible candidates, introducing stochasticity as a data augmentation strategy akin to computer vision techniques (Perez & Wang, 2017).” -> The paper lacks an explanation of why stochasticity needs to be introduced when doing path identification. - “Introduce cyclic re-indexing:... where r is a random integer and N (hyperparameter) exceeds the maximum node count. This ensures uniform token training by distributing index frequencies.” -> It is not clear what the relation between index frequencies and token training is, nor how re-indexing helps with that. - “Connect components by adding synthetic edges between randomly selected nodes.” -> What are the criteria by which nodes are chosen for synthetic edges? Methods And Evaluation Criteria: It makes sense. However, it would be better to expand on and specify how the linear scheduling function for Mask Scheduling (SMTP) is implemented. Is it the same as the one proposed by Chang et al.? If so, specify. Theoretical Claims: They seem to be correct. Experimental Designs Or Analyses: Please check the comments. Supplementary Material: I have reviewed the material. Relation To Broader Scientific Literature: I think this work contributes some novel mechanisms for solving graph problems.
Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: - Performance: demonstrates strong performance across multiple graph tasks, coming close to and sometimes exceeding SOTA. - Generalization: The experimental results are strong in many settings, including datasets from different domains. Weaknesses: - Size-to-performance ratio: while it is able to match and sometimes beat SOTA, it should be noted that other models achieve the same performance with a parameter count two orders of magnitude smaller than GraphGPT's. - In the benchmark only specific types of GraphGPT models are used. For instance, in Table 2, only GraphGPT-B is used and not GraphGPT-M. - It is unclear what is intended by model scalability (Section 3.3); additionally, scalability seems not to be the answer to the problem if we take into account the costs and computational resources required to solve the tasks. Other Comments Or Suggestions: Please see below. Questions For Authors: - In Section 2.1, when discussing the implementation steps: why is there a need to do node re-indexing? - How do you choose which nodes to connect when adding synthetic edges? - What linear scheduling function do you use for mask scheduling in SMTP? - In Section 3.3, what do you mean by “scales [...] to 2 billion parameters”? Can you elaborate on that? - In Section C.4, which k do you use for your token vocabulary (also with respect to Section 2.2.3)? - How is the prompt structured? How do you express the task to solve? - How do you evaluate the correctness of the response? Do you query the Transformer model again with additional information if the response is not correct? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for the constructive feedback. Please let us know if there are any concerns that have not been addressed. Q1. Explanation of why there is a need to introduce stochasticity when doing path identification. A1: GraphGPT lacks some inductive biases inherent to GNNs (e.g., node permutation invariance). Randomly sampling Eulerian paths per epoch forces the model to learn invariance between different paths of the same graph, akin to how ViT (lacking CNN’s inductive biases) benefits from large-scale data and data augmentation. Empirically, this reduced overfitting on molecular datasets. While we did not include an explicit ablation study due to space constraints, we acknowledge its importance and will clarify this in the final version. Q2. Unclear relation between index frequencies and token training, and how re-indexing helps. A2: Without cyclic re-indexing, Eulerian paths would always start with low-index tokens (e.g., 0, 1, 2), leading to skewed token frequency distributions. Cyclic re-indexing randomizes starting indices (e.g., selecting from {0,1,…,255} for N=256), ensuring uniform training across all index tokens. This is critical for datasets like Triangles, where test graphs have significantly more nodes than training graphs (e.g., test graphs up to 100 nodes vs. training graphs ≤25 nodes). Without re-indexing, higher-index tokens (e.g., 25–255) remained undertrained, degrading performance. We will expand these details in the appendix. Q3. Criteria for choosing nodes for synthetic edges? A3: Synthetic edges are added to connect disconnected components. For example, if a graph has disconnected components A, B, and C, we connect A-B via a random node pair, then B-C similarly. The synthetic edges are tagged with a special token <edge_jump> to distinguish them from real edges. This ensures the graph becomes connected, enabling Eulerian path generation. We will clarify this in the text. Q4. 
Is SMTP linear scheduling function implementation the same as Chang et al.? A4: Yes, we use the linear scheduling function from MaskGIT (Chang et al., 2022), defined as γ(r) = 1 − r, where r ∈ [0, 1) and is uniformly distributed. We will explicitly state this in the final version. Q5. Size to performance ratio. A5: We clarify parameter counts and performance across datasets below: - Graph-Level (Tables 1–2): GraphGPT’s parameter counts are comparable to prior SOTA (e.g., 113.6M vs. 86M for GPTrans). - Edge-Level (Tab.4): For ogbl-ppa, GraphGPT-B (145.3M) is a bit worse than Refined-GAE (295.8M), but GraphGPT-XXL (2B) achieves the highest performance. For ogbl-citation2, GraphGPT-M (46.8M) and GraphGPT-B (133.1M) outperform MPLP (749.8M). - Node-Level (Tab.5): GraphGPT requires larger parameters on ogbn-proteins and ogbn-arxiv. This may reflect insufficient pre-training data for these tasks, leading to suboptimal parameter utilization. Q6. In the benchmark only specific types of GraphGPT models are used. For instance in table 2, only GraphGPT B is used and not GraphGPT M. A6: We tested models of size S/M/B for most datasets (e.g., PCQM4M-v2 and ogbl-ppa). Omitted results were excluded from the main text due to space constraints but did not alter the conclusions. These results will be added to the Appendix in the final version. Q7. model scalability-section 3.3. A7: Our investigation of model scalability serves two critical purposes: 1. Studying performance limits reveals fundamental insights of data. Even small performance gains can reduce real-world validation costs [3]. 2. This study aligns with foundational NLP scaling law research [1,2], aiming to catalyze similar investigations for graph-structured data. Q8. Why node reindexing in sec. 2.1? A8: Re-indexing nodes reduces overfitting. Ablation experiments confirm its effectiveness: re-indexing increased the training loss but improved validation/test performance. 
This ablation study, initially omitted for brevity, will be included in the Appendix. Q9. k value in sec. 2.2.3/C.4? A9: For the datasets ogbl-ppa/citation2, ogbn-proteins/arxiv, we set k=2, resulting in vocabulary sizes of 41,634, 25,687, 31,360, and 25,600, respectively. Q10. How is the prompt structured? How do you express the task to solve? A10: We do not use prompts. Instead, tasks are encoded via specialized tokens appended to the input sequence and processed by an additional MLP head during fine-tuning as discussed in sec. 2.3.2. A figure illustrating the implementations will be added to the Appendix in the final version. Q11. Correctness of the response? Iterative querying? A11: The model directly outputs predictions via the task head during inference. Results are evaluated using standard metrics (e.g., MAE, accuracy) for the downstream task. Each test/valid instance is processed once; no iterative querying is performed. [1] Kaplan et al., arxiv:2001.08361 [2] Hoffmann et al., arxiv:2203.15556 [3] https://ogb.stanford.edu/docs/linkprop/#ogbl-ppa
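The cyclic re-indexing described in A2 and A8 above amounts to shifting every node index by a shared random offset modulo a vocabulary size N that exceeds the maximum node count, so all index tokens are trained with roughly uniform frequency. The following is a minimal sketch based on that description, not the actual GraphGPT code; the function name and defaults are assumptions.

```python
# Minimal sketch (assumed, not the released implementation) of cyclic
# node re-indexing: add a shared random offset r modulo N to every
# node index, so paths no longer always start with low indices 0, 1, 2.
import random

def cyclic_reindex(node_ids, N=256, r=None):
    """Shift all node indices by one offset r mod N. Repeated nodes stay
    repeated, so the graph structure encoded by the sequence is unchanged."""
    if r is None:
        r = random.randrange(N)   # fresh random offset per graph/epoch
    return [(i + r) % N for i in node_ids]

path = [0, 1, 2, 1, 3]                         # node indices along a Eulerian path
shifted = cyclic_reindex(path, N=256, r=40)
assert shifted == [40, 41, 42, 41, 43]         # equality structure preserved
```

Drawing a new offset each epoch is what spreads training signal over the whole index vocabulary, which matters when test graphs (e.g., Triangles with up to 100 nodes) are larger than training graphs.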
Summary: The paper presents GraphGPT, a novel self-supervised generative pre-trained model for graph learning that utilizes a new architecture called the Graph Eulerian Transformer (GET). The GET integrates a transformer architecture with a graph-to-sequence transformation method based on Eulerian paths, allowing for the reversible conversion of graphs into sequences of tokens representing nodes, edges, and attributes. The model is pre-trained using two self-supervised tasks: next-token prediction (NTP) and scheduled masked-token prediction (SMTP). GraphGPT is then fine-tuned for various downstream tasks, including graph-, edge-, and node-level predictions. The experimental results indicate that GraphGPT achieves state-of-the-art performance on multiple large-scale Open Graph Benchmark (OGB) datasets, particularly excelling in molecular property prediction and protein-protein interaction tasks. Notably, the model can scale to 2 billion parameters with sustained performance gains, addressing scalability issues faced by traditional Graph Neural Networks (GNNs) and prior graph transformers. However, the paper could benefit from improved clarity in presentation, theoretical grounding, and methodological details to enhance its impact and applicability in diverse domains. Claims And Evidence: The authors provide extensive experimental results demonstrating that GraphGPT outperforms existing methods on various benchmark datasets. However, the claims regarding the theoretical underpinnings of the model need to be further elaborated: for example, why the authors chose the Eulerian path for graph-to-sequence transformation and how it ensures a lossless and reversible mapping. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem at hand. The use of self-supervised pre-training and the task-agnostic fine-tuning approach aligns well with the objectives of improving graph representation learning.
The benchmarks selected for evaluation, including PCQM4Mv2 and ogbl-ppa, are relevant and widely recognized in the field. Theoretical Claims: The paper does not present formal proofs for its theoretical claims, particularly regarding the lossless and reversible mapping of graphs to sequences using Eulerian paths. Experimental Designs Or Analyses: The experimental designs and analyses appear sound, with a variety of datasets used to evaluate the model's performance across different tasks. Supplementary Material: The supplementary material includes detailed information on datasets, implementation details, and additional experimental results. This material is well-organized and provides valuable insights into the methodology and findings. Relation To Broader Scientific Literature: The contributions of this paper are well-positioned within the broader scientific literature on graph learning. The authors reference key prior works, situating their approach as an advancement in the adaptation of transformer architectures to graph data. Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Paper Strength** 1. Innovative methodology to convert graphs into sequences using Eulerian paths and using transformer architecture effectively capture graph structure. 2. Strong empirical findings demonstrating state-of-the-art performance across various benchmarks. 3. Comprehensive evaluation and analysis of the model's scalability and performance on large-scale datasets. **Paper Weakness** 1. Presentation needs to be improved for clarity and readability. For example, how to conduct the re-index and cyclic re-index in the graph-to-sequence transformation? 2. The theoretical bounding of the model's claims is not sufficiently detailed. Why Eulerian path is chosen and how it ensures lossless and reversible mapping? 3. Methodology lacks clarity in certain aspects, particularly regarding tokenization and feature usage. How to handle non-text-based features is unclear. 4. 
The significance of transferability across different domains is not convincingly established. 5. The paper does not compare with other pre-training-based graph models to highlight the advantages of GraphGPT. Other Comments Or Suggestions: - A preliminary section is recommended to provide background and a connection to the proposed method, which can help readers understand the motivation and significance of the work. - The presentation of the methodology can be improved to better illustrate the proposed method. Questions For Authors: 1. Why did you choose the Eulerian path for graph-to-sequence transformation? How does it ensure lossless and reversible mapping? 2. How do you conduct the re-indexing and cyclic re-indexing in the graph-to-sequence transformation? What are their differences? 3. Can you provide more details on the tokenization process used, particularly for non-text-based features? 4. Can you compare with other pre-training-based graph models to highlight the advantages of GraphGPT? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your questions. Our responses are as follows: Q1. Why did you choose the Eulerian path for graph-to-sequence transformation? How does it ensure lossless and reversible mapping? A1: The Eulerian path was selected for its ability to traverse each edge exactly once, enabling a sequential representation that preserves graph topology without redundancy. As detailed in Section 2.2.1, this approach guarantees lossless and reversible mapping by construction, ensuring the sequence-to-graph conversion retains full structural fidelity. Theoretical justification is provided in the final paragraph of Section 2.2.1. Q2. How do you conduct the re-index and cyclic re-index in the graph-to-sequence transformation? What are their differences? A2: Implementation details for both methods are outlined in Section 2.2.1 (Node Re-indexing) and visualized in Figure 1. Re-indexing assigns fixed node IDs based on traversal order, while cyclic re-indexing dynamically rotates starting nodes to ensure uniform appearance frequencies of node-index tokens during training. Their comparative impacts on model performance are analyzed in A2 and A8 in our response to the 4th review. Q3. Can you provide more details on the tokenization process used, particularly for non-text-based features? A3: Non-text attributes (e.g., numerical or categorical features) are discretized into tokens via binning or directly, as described in Section 2.2.1 (Attribute Handling). Appendix D provides concrete examples. Notably, the benchmark datasets (OGB) lack text features, so our focus centers on structured numerical/categorical attribute processing. Q4. Can you compare with other pre-trained-based graph models to highlight the advantages of GraphGPT? A4: While models like GraphBERT [1], GraphMAE [2], and GCC [3] employ graph pre-training, they primarily target small-scale datasets. 
GraphGPT’s evaluation focuses on large-scale OGB leaderboard benchmarks, where existing pre-trained models lack competitive entries. Our comparisons align with state-of-the-art baselines dominating these leaderboards, emphasizing scalability and performance on real-world graph tasks. [1] Zhang et.al, GRAPH-BERT. arxiv:2001.05140 [2] Hou et.al, GraphMAE. (KDD2022) [3] Qiu et.al, GCC. (KDD2020)
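To make A1 above concrete, the core graph-to-sequence idea can be sketched with Hierholzer's algorithm: starting from an odd-degree node of a (semi-)Eulerian graph, traverse every edge exactly once, emit the node sequence, and recover the edge set losslessly from consecutive node pairs. This is a minimal sketch under stated assumptions (a connected simple undirected graph), not GraphGPT's released tokenizer.

```python
# Minimal sketch (illustrative, not GraphGPT's implementation) of the
# Eulerian-path graph-to-sequence mapping: every edge is used exactly
# once, and the edge set is recoverable from consecutive node pairs,
# which is what makes the mapping lossless and reversible.
from collections import defaultdict

def eulerian_path(edges, start):
    """Hierholzer's algorithm for an undirected (semi-)Eulerian graph."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    stack, path = [start], []
    while stack:
        u = stack[-1]
        if adj[u]:
            v = adj[u].pop()
            adj[v].remove(u)      # consume the undirected edge once
            stack.append(v)
        else:
            path.append(stack.pop())
    return path[::-1]

edges = [(0, 1), (1, 2), (2, 0), (0, 3)]   # semi-Eulerian: odd degree at 0 and 3
seq = eulerian_path(edges, start=3)
# reversibility: consecutive pairs of the sequence give back the edge set
recovered = {frozenset(p) for p in zip(seq, seq[1:])}
assert recovered == {frozenset(e) for e in edges}
assert len(seq) == len(edges) + 1          # each edge traversed exactly once
```

The sequence length of |E| + 1 nodes (versus 2|E| for a naive edge list) is what keeps the representation compact while remaining invertible by construction.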
Summary: The authors in this paper introduce GraphGPT for graph learning, which leverages the Graph Eulerian Transformer (GET). The proposed model uses a graph-to-sequence transformation method based on Eulerian paths, enabling it to convert graphs into token sequences for transformer-based processing. Claims And Evidence: C1: GraphGPT excels with large-scale data, but a comparison with how GNNs perform is lacking. C2: While GraphGPT enables a lossless and reversible graph-to-seq transformation, how well does it do this on real-world noisy graphs? Methods And Evaluation Criteria: Yes, but it would be helpful to the readers if the authors could also include a runtime comparison with GNNs. Theoretical Claims: Yes, but there is very little validation testing whether the lossless property also holds in large, noisy, real-world datasets. Experimental Designs Or Analyses: Yes, the authors have very clearly demonstrated the performance of GraphGPT on several large-scale datasets that align with real-world applications. Supplementary Material: Yes, datasets, models and implementation details. Relation To Broader Scientific Literature: GraphGPT has been demonstrated to overcome the typical GNN limitations of over-smoothing and over-squashing and to reduce the need for computing adjacency matrices by using Eulerian paths. It brings graph pre-training closer to transformers but may require more rigorous evaluation of interpretability, robustness, and graph generation quality. Essential References Not Discussed: None. Other Strengths And Weaknesses: This is a novel approach with scalability to billions of parameters. However, there is a lack of clarity on the computational costs given the scalability, and on the robustness of the proposed model/method. Other Comments Or Suggestions: There are some minor typos in the paper; for example, in Section 1, "(semi- \n )Eulerian paths" can be revised.
Questions For Authors: Q1: How robust is the model to adversarial graph perturbations? Q2: Can GraphGPT generate graphs that match real-world constraints (e.g., chemical validity)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for the valuable feedback. Our responses are as follows: Q1. GraphGPT excels with large-scale data but lacks comparisons with GNNs. A1: We compared GraphGPT with multiple GNN baselines (e.g., GCN, GIN, GCN-VN, GIN-VN) in all experiments. These baselines are standard in graph learning literature (e.g., [1, 2, 3]). Results are mostly organized in tables with GNN baselines, followed by graph transformer baselines, then GraphGPT. We will clarify this in table captions in the final version. Q2. While GraphGPT enables a lossless and reversible graph-to-seq transformation, how well does it do this in real-world noisy graphs? A2: While not the focus of this paper, we tested GraphGPT on an internal noisy graph dataset (3.1M graphs, avg. 24.8 nodes, 54.7 edges) for edge denoising. Using a semi-supervised node classification task, GraphGPT achieved 10-20% F1 score improvement over baselines. We formulated the task analogously to POS tagging in NLP, leveraging token-level embeddings. The "long" variant outperformed "short" (see Fig. 1) likely due to its edge-agnostic token embeddings of nodes. Results were robust enough for online deployment. Q3. Include runtime comparisons with GNNs. A3: Runtime comparisons are typically parameter-count based in literature. However, we will add runtime benchmarks for GNN baselines (from cited papers) and GraphGPT in the appendix. Q4. Yes, but there is very little validation to test if the lossless property will also hold true in large, noisy, real-world datasets. A4: The lossless property is theoretically guaranteed by Eulerian path theory, independent of noise. Empirical performance on noisy graphs (as in A2) demonstrates practical robustness. Q5. This is a novel approach with scalability to billions of params. However, there is lack of clarity on the computational costs given the scalability and the robustness of the proposed model/method. 
A5: Computational costs for PCQM4M-v2 are discussed in Section 4. We will expand appendix details for other datasets. Robustness: most results show low variance, indicating robustness across runs. For adversarial/noisy robustness, see A2 and A7. Q6. There are some minor typos in the paper, for example, in Section 1, "(semi- \n )Eulerian paths" that can be revised. A6: Corrected and will review other typo errors. Q7. How robust is the model to adversarial graph perturbations? A7. Adversarial robustness is a promising research area across NLP, CV, and graphs [4-7]. While not our primary focus, preliminary results on noisy graphs (A2) suggest robustness through large-scale training. A deeper study would bridge GraphGPT’s transformer architecture with adversarial graph defenses, an encouraging future direction. Q8. Can GraphGPT generate graphs that match real-world constraints (e.g., chemical validity)? A8. While generation is not the primary focus, preliminary experiments show GraphGPT can generate valid molecules after pre-trained on PCQM4M-v2. However, generation quality depends on hyperparameters (e.g., temperature, top-p, iteration count T). Unconditional/conditional generation and diversity control require further study, which is planned for future work. References [1] Chen et al. GPTrans (IJCAI 2023), [2] Masters et al. GPS++ (TMLR 2023), [3] Hussain et al. Triplet Interaction (ICML 2024), [4] Guo et al. Gradient-based Adversarial Attacks (EMNLP 2021), [5] Shao et al. On the Adversarial Robustness of ViT (NeurIPS 2022 workshop), [6] Jin et al. Adversarial Attacks and Defenses on Graphs (SIGKDD Explorations), [7] Sun et al. Adversarial Attack and Defense on Graph Data (IEEE TKDE2023) --- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions and concerns. I will maintain my recommendation.
Summary: The paper "GraphGPT: Generative Pre-trained Graph Eulerian Transformer" proposes GraphGPT, a self-supervised generative pre-trained model for graph learning. The core contribution is the Graph Eulerian Transformer (GET), which enables transformers to process graph-structured data efficiently by converting graphs into sequence representations using Eulerian paths. Claims And Evidence: The authors mention that "GraphGPT scales to over 2 billion parameters with sustained performance gains.", but plots of the scaling behavior would make this claim stronger. Methods And Evaluation Criteria: Yes. But evaluating GraphGPT on real-world citation networks (e.g., PubMed, Cora) or social networks (e.g., Twitter, Facebook graphs) would be valuable. Theoretical Claims: Correct Experimental Designs Or Analyses: 1. No mention of computational resources. 2. Three runs might be too few for high-variance tasks. Supplementary Material: No Relation To Broader Scientific Literature: 1. GNNs struggle with long-range dependencies due to repeated message passing, leading to over-smoothing and over-squashing. GraphGPT circumvents this limitation by tokenizing graphs into sequences via Eulerian paths, enabling transformer-based models to process entire graphs without localized message passing. 2. GraphGPT extends self-supervised pretraining techniques from NLP (e.g., BERT, GPT-3) to graphs. It introduces Next-Token Prediction (NTP) and Scheduled Masked-Token Prediction (SMTP), adapting masked language modeling (MLM) techniques for graphs. Essential References Not Discussed: No Other Strengths And Weaknesses: The structure of the paper is somewhat messy, which may make it harder for the audience to follow. Also, the tables have too many footnotes; I spent some time figuring out the meaning of each footnote. It would be helpful to state their meanings in the captions, or to reference the footnote definitions.
Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the constructive feedback. Our replies are as follows: Q1. Authors mentioned that "GraphGPT scales to over 2 billion parameters with sustained performance gains.", but some plots about the scaling laws would make this claim stronger. A1: We appreciate the suggestion to strengthen our scaling analysis. Unlike NLP data, graph datasets lack uniformity, making it impractical to pre-train a single model across diverse domains (e.g., social networks vs. molecular graphs). As a result, GraphGPT is pre-trained and fine-tuned separately for different domains. For datasets like PCQM4Mv2, we observe performance saturation at 227M parameters (Table 1), while for ogbl-ppa, we scale up to 2B parameters. The pre-training loss decreases steadily with increasing model size (Figure 3, Appendix, page 19), mirroring trends in NLP scaling studies (e.g., Fig.1 of Llama1 [1]). Fine-tuning results for three model sizes are reported in Table 4 for brevity, but we will include results for all six sizes (4M–2B parameters) in the Appendix. While a comprehensive scaling law analysis (e.g., estimating model/data scaling exponents) is beyond this paper’s scope, we will add logarithmic plots of pre-training loss and fine-tuning metrics versus non-embedding parameter counts to the Appendix, analogous to NLP scaling plots (e.g., Figure 1 of [2]). Q2. Yes. But evaluate GraphGPT on real-world citation networks (e.g., PubMed, Cora) or social networks (e.g., Twitter, Facebook graphs) could be great. A2: We evaluated GraphGPT on large-scale real-world citation networks: ogbn-arxiv (169K nodes, 1.17M edges) and ogbl-citation2 (2.93M nodes, 30.6M edges). These datasets are significantly larger than traditional benchmarks like Cora (2.7K nodes, 5.4K edges) and PubMed (19.7K nodes, 44.3K edges), aligning with our focus on scaling to massive graph data. 
We chose these datasets because GraphGPT’s performance benefits from large-scale pre-training data to learn inductive biases (e.g., node permutation invariance). For instance, pre-training on the small Triangles dataset (45K graphs) yielded poor fine-tuning results (32.6%), whereas scaling pre-training data improved performance to 99% (Section 3.2.1). This mirrors the trend with ViTs, which outperform CNNs only on sufficiently large datasets [3]. While GNNs may outperform GraphGPT on small datasets like Cora or PubMed, our goal is to demonstrate scalability for large-scale graphs—a critical challenge in modern applications. We will clarify this rationale in the final version. Q3. No mention of computational resources. A3: We have included computational resource details for the PCQM4Mv2 dataset in Section 4.2 (Limitations: Computational Cost). To address this feedback, we will expand this discussion in the revised version to provide comprehensive resource metrics (e.g., GPU types, training time, memory usage) for all key experiments, including the ogbn-proteins/arxiv and ogbl-ppa/citation2 datasets. This information will be added to the Appendix to ensure transparency. Q4. Three runs might be too few for high-variance tasks. A4: We clarify that variance is inherently low for most large-scale datasets (e.g., PCQM4Mv2, ogbl-ppa). For these datasets, 3–5 runs consistently yield minimal variance (as shown in tables). (It is common practice not to report the variance for PCQM4Mv2.) For the Triangles dataset, variance is higher—particularly on OOD test data—so we conducted 10 runs to ensure robustness. As shown in Table 3, GraphGPT pre-trained on large-scale data achieves superior performance with reduced variance (e.g., 58.96 ± 1.9 vs. 54.76 ± 7.24). To improve clarity, we will explicitly state the number of runs in table captions or footnotes where applicable. Q5. The structure of the paper is kind of messy. 
A5: We appreciate your input and welcome specific suggestions to improve clarity. Could you clarify which aspects of the structure are most problematic (e.g., section organization, flow of technical details in Sections 2.2–2.3, or appendices)? For instance, if the nested content in Sections 2.2 and 2.3 is unclear, we will reorganize subsections to enhance logical progression. We are committed to refining the structure to improve accessibility for readers. Q6. Too many footnotes in the table. A6: We apologize for the confusion caused by the table’s formatting. To conserve space and ensure clear citations, we used numerical superscripts to reference source papers (similar to [4]) and subscripts to denote model sizes (detailed in Appendix Tab. 10). We agree that this notation requires clarification and will incorporate explicit definitions of these notations directly into the table captions in the revised version. [1] Touvron et al., LLaMA 2023 [2] Kaplan et al., Scaling laws 2020 [3] Dosovitskiy et al., ViT. (ICLR2020) [4] Hussain et al., Triplet Interaction (ICML2024) --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the rebuttal; my concerns have been resolved. After considering the other rebuttals, I recommend this paper be accepted.
Sparse Autoencoders for Hypothesis Generation
Accept (poster)
Summary: This paper proposes a hypothesis generation method that trains a sparse autoencoder so that text examples triggering the same neuron can be interpreted as containing the same human concept, and then leverages an LLM to generate an interpretation of each neuron from these examples. The paper performs experiments on both synthesized and real-world datasets, and the results show that the method has better accuracy and efficiency than the SOTA. ## update after rebuttal The authors have addressed my major concerns and questions, and I increased my rating to accept. Claims And Evidence: The claims in this paper are overall convincing, with some claims overstated or in need of more support: 1. In the introduction, the paper claims it "clarifies the challenge" of context window and reasoning limitations in LLM-based approaches for hypothesis generation; however, the method proposed here is also subject to the context window limit when generating interpretations for each neuron. 2. In Section 4.1, it claims the dimensions of z_i are empirically "often highly interpretable, corresponding to human concepts", but without any supporting evidence or reference. The validity of this statement is critical for the entire work and deserves more attention. Methods And Evaluation Criteria: For the method, using a sparse autoencoder to find text examples that may carry the same concept makes much sense, but interpreting each single neuron of the activation matrix as a natural language concept is problematic: neurons are discrete, but language concepts are not. Furthermore, there is no convergence analysis on the concepts (labels) of neurons interpreted by the LLM. Since the LLM has to interpret a neuron based on a small number of examples that activate the target neuron, the paper should provide more insight into how stable the interpretations are when the examples are changed, or even when just changing the temperature. Theoretical Claims: Not. 
Experimental Designs Or Analyses: Yes, checked the datasets and metrics. Supplementary Material: B.2. Neuron interpretation Relation To Broader Scientific Literature: It improves on LLM-only hypothesis generation approaches by using a sparse autoencoder to generate interpretable feature representations (SAE neurons) instead of directly prompting an LLM. Essential References Not Discussed: Not to my awareness. Other Strengths And Weaknesses: Strengths: - The paper is well structured and presented. - The method has good computational efficiency. Weaknesses: - Interpreting the neurons with an LLM requires per-dataset manual effort to tune the bin percentiles from which to sample highly-activating and weakly-activating examples, making the method less generalizable. - Lack of human evaluation of the interpretation quality of the LLM and the generated hypotheses. Other Comments Or Suggestions: in equation (6), the `z` should be `z_i` Questions For Authors: - Does the H from "HYPOTHESAES outputs H natural language concepts (line 188)" equal the dimension M of the activation matrix, Z_SAE? And how do you determine M? - Did you observe any hallucinations from the LLM when labeling neurons? If so, how did you address them? - How sensitive is the method to different sparsity levels (k in the TopK activation)? Code Of Conduct: Affirmed. Overall Recommendation: 4
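For context on the TopK sparsity question above, a generic TopK sparse-autoencoder activation can be sketched as follows (an illustration of the standard mechanism, not tied to this paper's implementation; shapes and values are hypothetical):

```python
import numpy as np

def topk_activation(pre, k):
    """Keep the k largest pre-activations in each row and zero the rest;
    any negative survivors are clipped at zero (ReLU), as in TopK SAEs."""
    out = np.zeros_like(pre)
    idx = np.argpartition(pre, -k, axis=1)[:, -k:]   # indices of the k largest entries per row
    rows = np.arange(pre.shape[0])[:, None]
    out[rows, idx] = np.maximum(pre[rows, idx], 0.0)
    return out

# Each text's hidden code keeps only k active neurons:
z = topk_activation(np.array([[3.0, -1.0, 2.0, 0.5]]), k=2)  # -> [[3.0, 0.0, 2.0, 0.0]]
```

Larger k trades sparsity (and hence per-neuron interpretability) for reconstruction fidelity, which is what the sensitivity question probes.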
Rebuttal 1: Rebuttal: Thank you for your detailed, positive review. We are glad you find the claims convincing, the paper well-presented, and the method efficient. To reply to your questions: (1) “The method proposed here is also subjected to the context window limit[...]” Thanks for pointing this out; we will clarify in the paper. HypotheSAEs doesn't run into context window limit issues in practice because it applies LLMs only to *interpret* SAE neurons, for which the LLM requires only a few examples (we test this in C.2). In contrast, prior methods use LLMs to *learn* the interpretable features in the first place, which requires reasoning over many examples. (2) “it claims the dimension of z_i to be [...] highly interpretable [...] without any supporting evidence” Thanks for this point. Supporting references are in Sec 1+2.2, and we will include them here as well. However, this sentence is meant to motivate using SAEs, rather than to prove this claim; our experiments on real datasets demonstrate it holds in our setting. One quantitative metric (see response to R2) is that interpreting SAE neurons yields much higher fidelity vs. interpreting embeddings directly (F1: 0.84 vs. 0.54). (3) “Interpreting each single neuron of the activation matrix to be a nature language concept is problematic” Thanks; we agree this is a key challenge, which is why we conduct extensive experiments to maximize fidelity to the underlying neuron (Appendix C). Empirically, Figure 3 demonstrates that the loss in predictive performance due to using discrete concepts to approximate the neurons is quite small on average (-2.4%). (4) “How stable are the interpretations when the examples are changed” Thanks for this question. We ran a new experiment to measure stability. For 100 random neurons on the Yelp dataset, we generated 3 interpretations with different random seeds, and computed text embeddings of all interpretations. 
The mean pairwise cosine similarity of two interpretations generated for the same neuron is high: 0.84 (vs. 0.34 for a pair of interpretations from two different neurons). This is despite two sources of randomness—sampling different examples and an LLM temperature of 0.7—showing that the underlying concept learned by each neuron is durable. Here is a set of interpretations for a random neuron (can share more upon request): Neuron 50 (stability: 0.76): ['discusses the seasoning of food, either praising it as perfect or criticizing it as lacking', 'mentions seasoning or lack of seasoning in the food', 'mentions seasoning or lack thereof in the context of food flavor'] (5) “Interpreting the neurons with LLM requires per dataset manual efforts to tune the bin percentiles”; “How sensitive is the method to different sparsity levels (k in TopK)”. Thanks for these questions about hyperparameters. We ran some experiments and found that results are not particularly sensitive; they work well with defaults: Headlines, using default bin [90, 100] instead of [80, 100] as in paper: AUC 0.70, 11/20 significant (still beats all baselines) Yelp, using default k=8 instead of k=32 as in paper: R^2 0.77, 14/20 significant (~identical to original) We provide hyperparameter guidance for practitioners in Appendix B, with more detail in the Python package we released publicly (which, unfortunately, we aren't permitted to link here). (6) “Lack of human evaluation for the interpretation quality of the LLMs and the generated hypothesize” Thank you; in light of this comment, we conducted a qualitative human eval where we asked three computational social science researchers (not involved with the paper) to evaluate all significant hypotheses on the Headlines and Congress datasets. We followed prior HCI work (Lam et al. 2024, “Concept Induction”) and asked them to annotate for “Helpful” and “Interpretable” hypotheses. We use the median of the three ratings. 
HypotheSAEs substantially outperforms baselines in terms of raw counts and percentages: 24/30 (80%) are rated helpful, and 29/30 (97%) are interpretable. Results plot: https://imgur.com/a/qw6bt3s We also spoke to domain experts about novelty; see our reply to R3. We hope these findings increase your confidence that our hypotheses are high-quality. (7) “Does the H [...] equal the dimension M” No: M is the total number of SAE neurons, from which we select H predictive neurons to interpret. We choose it based on the validation AUC of how well the SAE neurons predict *y* (see B.1 + our repo). (8) “Did you observe any hallucinations from the LLM when labeling neurons?” We did not observe hallucinations in the usual sense, but some interpretations did not describe the neuron well, which is why we generate multiple interpretations and choose the highest-fidelity one. In practice, our results are not very sensitive to this step (see B.2). Given the strengths you highlight, and our experiments to address your comments, would you consider raising your score? If not, do you have further questions?
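The interpretation-stability check in point (4) of the rebuttal above (mean pairwise cosine similarity between embeddings of a neuron's interpretations) can be sketched as follows; the vectors here are toy stand-ins for the text embeddings of the LLM-generated interpretation strings:

```python
import numpy as np

def mean_pairwise_cosine(embs):
    """Average cosine similarity over all unordered pairs of rows."""
    X = embs / np.linalg.norm(embs, axis=1, keepdims=True)  # L2-normalize rows
    sims = X @ X.T                                          # all pairwise cosines
    iu = np.triu_indices(len(X), k=1)                       # unordered pairs only
    return float(sims[iu].mean())

# Three interpretations of the same neuron should embed close together,
# giving a within-neuron score near 1 (vs. a much lower cross-neuron score).
same_neuron = np.array([[1.0, 0.1, 0.0], [0.9, 0.2, 0.1], [1.0, 0.0, 0.2]])
score = mean_pairwise_cosine(same_neuron)  # close to 1
```

In the rebuttal's experiment this statistic is 0.84 for interpretations of the same neuron versus 0.34 for interpretations of different neurons.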
Summary: The paper presents HYPOTHESAES, a three-step method (SAE-based feature generation, feature selection, and LLM-based feature interpretation) for hypothesis generation that identifies interpretable relationships between text data and a target variable. The approach leverages sparse autoencoders (SAEs) to learn meaningful, human-interpretable features, which are then used to generate hypotheses through feature selection and large language model (LLM)-based interpretation.
Key Contributions:
1. Theoretical Framework: The authors establish a formal framework for hypothesis generation, introducing a "triangle inequality" that relates the predictiveness of learned features to their interpretability.
2. Algorithm - HYPOTHESAES: A three-step approach: (a) Feature Generation: Train a sparse autoencoder on text embeddings to create interpretable neurons. (b) Feature Selection: Identify predictive neurons using an L1-regularized regression. (c) Feature Interpretation: Use an LLM to label neurons with natural language descriptions, forming hypotheses.
Main Findings:
- Improved Hypothesis Generation: HYPOTHESAES outperforms recent LLM-based hypothesis generation methods and traditional topic modeling approaches.
- Efficiency: Requires 1-2 orders of magnitude less computation than LLM-driven baselines.
- Interpretability: The sparse autoencoder structure ensures that identified features align with human-interpretable concepts.
Claims And Evidence: Most claims are convincingly supported, particularly those regarding performance gains, computational efficiency, and the effectiveness of SAEs. The novelty claim could benefit from additional validation, especially through expert evaluation. While the method identifies hypotheses that were not explicitly found in prior studies, the paper does not provide direct human validation of the novelty and importance of these hypotheses. The “broad applicability beyond text-based tasks” claim is overstated. 
The method is only tested on text-based hypothesis generation tasks. There is no evidence that it generalizes to healthcare (e.g., clinical note analysis) or biology (e.g., scientific literature mining). The claim would need domain-specific evaluations before being fully credible. Methods And Evaluation Criteria: Yes, they do. Theoretical Claims: The theoretical framework (the triangle inequality for hypothesis generation) is a fundamental contribution, but it is weakly supported: Proposition 3.1 is an interesting insight, yet the empirical validation is indirect; while the model works well, the necessity of this specific theoretical bound for practical performance is unclear. Experimental Designs Or Analyses: The evaluation metrics, statistical tests, and experimental setups are generally valid and well-designed. However, a key limitation is the lack of human evaluation to assess the novelty of the generated hypotheses. Supplementary Material: Yes, I reviewed all the supplementary materials in the appendix, including Additional Synthetic Experiments, Hyperparameter Settings & Training Details, Labeling Fidelity and LLM-Based Interpretation, Cost and Runtime Analysis, and Theoretical Analysis. Relation To Broader Scientific Literature: This paper synthesizes ideas from hypothesis generation, sparse representation learning, and interpretability research to create a scalable and structured approach for discovering meaningful insights from text data. Unique contributions to the broader literature include: 1. Combines sparse autoencoders with LLM interpretation for efficient hypothesis generation. 2. Establishes a theoretical link between interpretability and predictiveness in machine learning models. 3. Demonstrates real-world applications that extend findings in political science, marketing, and behavioral research. 
Essential References Not Discussed: It's better for the authors to discuss the difference between the hypothesis generation scenario in the paper and more complex scientific hypothesis generation scenario in the papers below: Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers arXiv preprint arXiv:2409.04109 Learning to generate novel scientific directions with contextualized literature-based discovery arXiv preprint arXiv:2305.14259 Scimon: Scientific inspiration machines optimized for novelty arXiv preprint arXiv:2305.14259 Researchagent: Iterative research idea generation over scientific literature with large language models arXiv preprint arXiv:2404.07738 Other Strengths And Weaknesses: Strengths 1. Originality (a) Novel combination of sparse autoencoders and hypothesis generation. The paper introduces a creative synthesis of ideas from sparse representation learning, interpretability research, and hypothesis generation. Prior work has focused on either interpretable feature extraction (SAEs) or LLM-driven hypothesis discovery, but this paper combines the two, making hypothesis generation more scalable and efficient. The triangle inequality formulation (Proposition 3.1) provides a new theoretical perspective on the trade-off between feature predictiveness and interpretability. (b) Breaks from fully LLM-driven approaches. Recent LLM-based hypothesis generation methods (e.g., HYPOGENIC, NLPARAM) require high computational cost, limiting their scalability. By decoupling feature learning from LLM inference, HYPOTHESAES provides a more practical and resource-efficient solution. 2. Clarity (a) Well-structured and clearly written. The paper provides a step-by-step explanation of the method (Figure 1) and offers intuitive interpretations of the results. The supplementary material includes detailed experimental settings and theoretical justifications, ensuring reproducibility. (b) Strong theoretical grounding. 
Proposition 3.1 is well-motivated, and the proofs in the appendix provide a rigorous foundation for the method’s effectiveness. Weaknesses Issues listed in the above sections. Other Comments Or Suggestions: The introduction of the datasets should be in more formal language. For example, around line 263, 268, 274 etc., the sentences can be modified to be: We utilize 200k reviews for training, 10k for validation, and 10k for held-out evaluation. As the appendix is long, it would be better to have a table of contents to better organize everything and add a brief paragraph about all the prompt templates, etc. Questions For Authors: 1. What if we directly use pretrained LLMs to perform the same task and use them as one of the baselines? It seems pretrained LLMs can also perform the task and offer some self-explanations. I am curious about the performance and the cost. 2. How do you define the novelty of a generated hypothesis? While the method identifies hypotheses that were not explicitly found in prior studies, the paper does not provide direct human validation of the novelty and importance of these hypotheses. 3. Can you provide any supporting evidence that the method can be applied to more complicated real-world hypothesis generation in other fields like healthcare? Or can you discuss the difference between complicated scientific hypothesis generation using LLMs in the papers I mentioned above and the proposed method? 4. It would be better if the authors could add more discussion about the necessity of the specific theoretical bound (proposition 3.1) in practical performance. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the detailed review and suggestions. We're glad you found our method original, clearly presented, theoretically grounded, and computationally efficient, with the performance gains convincingly supported. To respond to your questions and suggestions: (1) “ ‘Broad applicability beyond text-based tasks claim’ is overstated”. Could you clarify which part of the paper you’re referring to (we couldn’t find this quote directly)? We agree that we do not provide evidence that the method generalizes beyond text-based tasks; we would be happy to revise specific areas where you thought this was implied. (2) “What if we directly use pretrained LLMs [...] as one of the baselines?” We included a baseline, HypoGeniC, which uses pretrained LLMs directly. This method performed worse across 14 of 15 quantitative comparisons and was ~10x more expensive and ~30x slower, owing to the need to make many LLM calls to select candidate hypotheses. If you have specific suggestions for other ways to use pretrained LLMs, please let us know! (3) “Novelty claim could benefit from additional validation, especially through expert evaluation.” Thank you; based on this suggestion, we reached out to two experts to assess the novelty of our findings on the Headlines dataset (Table 1 of the paper). In particular, we asked them whether the three hypotheses most negatively associated with engagement—"addressing collective human responsibility or action", "mentioning environmental issues or ecological consequences", "mentioning actions or initiatives that positively impact a community"—were novel or reported previously. They did not provide papers with these specific findings; for example, one expert noted, “I don't know of any papers specifically looking at [the hypotheses you mention]”. Both experts pointed us to theories that are broadly supported by these findings: e.g., Robertson et al. 
(2023) find that negativity drives news consumption, which is consistent with the third hypothesis. In light of your and R4’s comments, we also conducted a human eval where we asked three computational social science researchers (not involved with the paper) to evaluate all significant hypotheses on the Headlines and Congress datasets. We followed prior HCI work (Lam et al. 2024, “Concept Induction”) and asked them to annotate for “Helpful” and “Interpretable” hypotheses. We use the median of the three ratings. HypotheSAEs substantially outperforms baselines in terms of raw counts and percentages: 24/30 (80%) are rated helpful, and 29/30 (97%) are interpretable: [Hypothesis Human Eval - Imgur](https://imgur.com/a/qw6bt3s) We hope these new findings increase your confidence that the results from HypotheSAEs are (1) novel, as per our correspondence with experts; and (2) helpful and interpretable, as per our human evals. (4) You asked about “the necessity of the specific theoretical bound (proposition 3.1) in practical performance.” We agree the theoretical bound is not strictly necessary for practical performance, but rather serves as a broader motivation for the procedure, as you note. It also provides us a way to conceptually decompose hypothesis generation performance into (1) neuron predictiveness, and (2) interpretation fidelity. C.3 explores this empirically. (5) You asked about “the difference between ... the paper and more complex scientific hypothesis generation scenario in the papers below.” Thank you for these references, which we will include in an additional related work paragraph titled “Automated Idea Generation”. 
We agree that these literatures are both in service of helping researchers conduct science, but they address different tasks: Our work is focused on solving the problem: “given a dataset of texts and a target variable, what are the human-understandable concepts that predict the target?” In contrast, the literature you mention focuses on: “given a corpus of prior scientific papers, can we propose promising research ideas?” Practically, the former task emerges when a researcher knows what they are studying and have collected data, but they need tools to make sense of the data. The latter task emerges when a researcher is trying to decide *what* to study based on prior literature. Methodologically, the former task involves methods like clustering & interpretability, while the latter involves methods like prompting & RAG. We think these two literatures are distinct, but complementary; for example, a political scientist might use an idea generation method to decide to study partisan differences in social media posts, and then, after collecting a dataset, use HypotheSAEs to propose data-driven hypotheses for further validation. (6) More formal language; appendix table of contents: Thank you! We will make these edits. Given the strengths you highlight, and our experiments to address your comments, would you consider raising your score? If not, do you have further questions? --- Rebuttal Comment 1.1: Comment: Thank you for the detailed and thoughtful rebuttal. You've addressed most of my concerns. My main remaining concern is the claim regarding the method’s broad applicability beyond text-based tasks, specifically the statement in the conclusion: "The method applies naturally to many tasks in social science, healthcare, and biology." 
Based on the datasets used in the paper, the method seems best characterized as addressing tasks of the form you mentioned: given a dataset of texts and a target variable, what are the human-understandable concepts that predict the target—a formulation that may be commonly studied in social science. I think this task essentially corresponds to a classification task with explanations. However, in real-world hypothesis generation, for example in domains like healthcare and biology, hypotheses are often more open-ended and complex, and may not be reducible to this specific task formulation. As such, I believe the claim of broad applicability is somewhat overstated. It may be clearer and more accurate to explicitly define your hypothesis generation task as one focused on discovering interpretable predictors from labeled text data. And since no empirical evidence is provided for applications in healthcare or biology, it might be more appropriate to reserve that discussion for the impact statement rather than the main conclusion. --- Reply to Comment 1.1.1: Comment: We’re happy to hear that we addressed most of your concerns. Thank you for explaining this further; we agree with your point that the conclusion is too broad, and we will revise as follows: - We will remove "_The method applies naturally to many tasks in social science, healthcare, and biology_” and replace it with “_The method applies naturally to settings with large text corpora labeled with a target variable of interest._” - We will revise the citations to point to clear examples of tasks where one can run our method: Ziems et al., 2024 (a review of tasks and datasets across many disciplines within social science); Bollen et al. 2010 (social networks); Card et al. 2022 (political science); Breuer et al. 2025 (media studies). These settings are related to the ones we study and fall within our task formulation. 
- We will clearly delineate non-text datasets and more specialized settings as room for future work not covered by our paper: "_An exciting direction for future work, which we do not explore here, is extending our method to non-text modalities (e.g., proteins or images: Vig et al. 2021; Dunlap et al. 2023) as well as more specialized text domains (e.g., healthcare: Hsu et al. 2023, Robitschek et al. 2025)._" This is our final chance to reply, but if we have addressed your concerns in these two sets of revisions, would you consider increasing your score? While we won’t be able to reply, feel free to suggest additional tweaks that you think will add further clarity, and we will strongly consider incorporating them. Thanks again for your comments, which have strengthened the paper.
Summary: This paper addresses the task of using LLMs to take a labeled text dataset and propose natural language hypotheses predicting those labels. The performance of hypotheses is measured by having an LLM evaluate each example according to the explanations, producing a boolean vector. Using this vector, a linear regression is learned to predict whether the output is true or false, and this is scored in several ways - recovering known hypotheses on synthetic datasets, getting many statistically significant hypotheses, having high AUC, etc. The authors implement several existing baselines. Their key contribution is a new method based on sparse autoencoders on text embeddings. They use the OpenAI text embedding API and train a narrow sparse autoencoder with approximately a thousand units. They then identify which units are predictive of the labels using L1-regularised logistic regression and select the most predictive ones. They then use auto-interp on the top units to generate 3 hypotheses (GPT-4o), score them (GPT-4o mini), and then validate the predictive power of the hypotheses on a held-out validation set, with a Bonferroni correction for multiple hypothesis testing. They find that this significantly outperforms existing (largely LLM-based) baselines across most metrics. Finally, they present several qualitative examples of the explanations produced. These explanations broadly seem reasonable and interpretable, are often more specific than prior methods would give, and on some well-studied datasets give what the authors consider to be novel hypotheses. 
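The unit-selection step summarized above (L1-regularised logistic regression over SAE activations, keeping the most predictive units for auto-interp) can be sketched roughly as follows; the activations, labels, and hyperparameters are synthetic stand-ins, not the authors' actual setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-in for sparse SAE activations: 500 texts x 64 neurons, mostly zero
Z = rng.poisson(0.2, size=(500, 64)).astype(float)
# Labels driven by two "true" neurons (3 and 17)
y = (Z[:, 3] + Z[:, 17] > 0).astype(int)

# L1 regularisation shrinks uninformative units toward zero coefficients
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(Z, y)
top_units = np.argsort(-np.abs(clf.coef_[0]))[:5]  # candidates passed to LLM auto-interp
```

On this toy data, the informative units 3 and 17 dominate the coefficient magnitudes, mirroring how the paper narrows roughly a thousand SAE units down to a handful worth interpreting.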
Claims And Evidence: Yes Methods And Evaluation Criteria: Seems reasonable, though I am not very aware of the right baselines and implementation for this task Theoretical Claims: Briefly Experimental Designs Or Analyses: The HypotheSAEs method is sound Supplementary Material: Not much Relation To Broader Scientific Literature: Sparse autoencoders have been a major focus of the mechanistic interpretability field over the past year. A major open question is whether sparse autoencoders are practically useful on downstream tasks in a way that beats baselines, and there have been many recent negative results. In my opinion, as an interpretability researcher, this paper is highly significant because it is the most compelling example I have yet seen of sparse autoencoders beating baselines on a real task that people have actually tried to solve with other methods. Furthermore, this kind of exploratory analysis and hypothesis generation is exactly the kind of thing where I would expect sparse autoencoders to add significant value, as they are able to surface unexpected hypotheses. Essential References Not Discussed: No Other Strengths And Weaknesses: As discussed above, this paper is a compelling example of sparse autoencoders (SAEs) beating baselines on a real task. This is the best example I have seen on an extremely important question, and for this reason, I consider this work a strong accept. Not only are SAEs substantially cheaper than the existing modern baselines, but they also performed substantially better. Enough qualitative examples were provided that I am reasonably satisfied that real hypotheses are being surfaced rather than spurious findings. Similarly, it was able to recover the correct hypotheses on the synthetic tasks. The statistical analysis and effort put into baselines seemed of fairly high quality, though I found it difficult to judge the details here, as I am not familiar with this task under these baselines. 
Other Comments Or Suggestions: A few things could improve the paper further: I would love if you could take a dataset where your method seems to generate novel hypotheses, like the Congress one, and show them to a domain expert. Have them evaluate whether these are interesting and novel, and whether they seem plausible. If they say yes, I am much more impressed by your results. It would be worthwhile to try simpler baselines, such as:
- Using the PCA basis rather than the SAE basis
- Decision trees
- Logistic regression with L1 on bag of words
- Frequency analysis of words or bigrams
- Verifying that these don't work well

I'm concerned that with any complex baseline, it could be easy to overcomplicate things or misimplement them. We might have the wrong hyperparameters, and it's good to try a range of styles, just in case. I'd also be excited to see whether [matryoshka SAEs](https://www.lesswrong.com/posts/rKM9b6B2LqwSB5ToN/learning-multi-level-features-with-matryoshka-saes) help here. You mentioned in the appendix that sometimes you train several SAEs of different sizes, and matryoshka is designed to remove the need for that by allowing concepts in a range of different granularities to be learned in the same SAE. I would be keen to see the authors open source this code and make it easy for other people to work with. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for the thoughtful and positive review. We’re especially glad to read that as an interpretability researcher, you found our work to be “highly significant because it is the most compelling example I have yet seen of sparse autoencoders beating baselines on a real task that people have actually tried to solve with other methods.” Indeed, we agree that recent literature is mixed on what value SAEs contribute, and we see the method as highlighting an area in which they have a real advantage: the ability (as you note) to find interpretable and *unexpected* patterns in data. We will further highlight this contribution in the paper. We also thank you for suggesting simpler baselines, which we agree would clarify the importance of the SAE. We ran several of them on the three real-world datasets (Headlines, Yelp, Congress). Besides the feature generation step, the baselines are identical to our method.

**Embedding**: Run Lasso to select 20 dimensions directly from embeddings.

**PCA**: Transform embeddings to PCA basis (k=256, which explains ~80% variance), then select 20 dimensions using Lasso.

**Bottleneck** (suggested by R1): Fit a simple neural network which uses the embeddings as input, a hidden 20-neuron “bottleneck” layer, and predicts the target as output. Then interpret the 20 bottleneck neurons. (This is conceptually similar to NLParam, but simpler.)

| Method | Total Count | Average Predictiveness | Average Fidelity |
|------------|-------------|------------------------|------------------|
| SAE | **45/60** | **0.717** | **0.836** |
| Embedding | 30/60 | 0.706 | 0.545 |
| PCA | 23/60 | 0.697 | 0.547 |
| Bottleneck | 27/60 | 0.699 | 0.640 |

Metrics are combined/averaged across the three datasets. 
Recall that **count** is the number of significant hypotheses in a multivariate regression; **predictiveness** is the AUC (or R^2 for Yelp) using all hypotheses together; **fidelity** is the F1 score for how well the interpretation matches the neuron activations. Using the SAE produces many more hypotheses which are significant in a multivariate regression (45/60 across the three datasets for SAE vs. 30/60 for the next best baseline); slightly higher predictiveness; and much higher interpretation fidelity. This is consistent with our qualitative finding that SAE neurons fire on more specific, distinct concepts than embeddings. Specificity permits the high-fidelity interpretations, and distinctiveness results in the wide breadth of hypotheses. While these baselines perform reasonably well in terms of predictiveness, for downstream utility, predictiveness is only a prerequisite: we ultimately want hypotheses that are non-trivial and non-redundant (as is discussed in Sec 6.2). We believe the results of these baselines clarify the value of the SAE in our setting, and we are excited to include them in the updated version of the paper. Regarding your suggestion to fit an n-gram/bag-of-words model: in the paper, we perform a related analysis for the Congress dataset, using n-gram results from Gentzkow et al. (a seminal, widely-cited analysis). They fit an L1 regression using bigrams and trigrams, and report 600 predictive ones. Controlling for counts of the n-grams on this list, we find that our hypotheses improve AUC from 0.61 to 0.74 and 28 out of 33 remain significant. This suggests that n-grams aren’t sufficient to cover our hypotheses. Qualitatively, we find hypotheses which we think n-grams can’t easily capture, like “criticizes the allocation of resources or priorities by the government, particularly highlighting disparities between domestic needs (e.g., education, healthcare, energy) and actions abroad (e.g., Iraq)”. 
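The **count** metric can be illustrated concretely: with m candidate hypotheses, a Bonferroni correction compares each p-value against alpha/m. The p-values below are made up for illustration and are not taken from the paper.

```python
import numpy as np

def count_significant(p_values, alpha=0.05):
    """Count hypotheses significant after a Bonferroni correction:
    each p-value is compared against alpha / (number of hypotheses)."""
    p = np.asarray(p_values)
    threshold = alpha / len(p)
    return int((p < threshold).sum())

# Illustrative p-values for 5 candidate hypotheses; threshold is 0.05/5 = 0.01.
p_vals = [1e-6, 0.004, 0.2, 0.009, 0.6]
print(count_significant(p_vals))  # → 3
```

In the paper's setting, the p-values would come from the coefficients of the multivariate regression over all candidate hypotheses.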
Thank you also for the pointer to Matryoshka SAEs. We’ve been excited about these as well, and plan to add them to our codebase and try them. We agree they may reduce the need to stitch together SAEs of different sizes. We agree that working with domain experts would be a valuable direction to increase confidence in our results. In the last week, we took some initial steps here (more detail in the response to R3): we (1) confirmed that two domain experts did not know of specific work producing several of our findings on the headlines dataset (the researchers also noted that some of these findings support theories in psychology) (2) conducted a human eval for whether our hypotheses are helpful and interpretable, which produced strongly positive results (https://imgur.com/a/qw6bt3s). We share your desire to make our code easy to work with. In fact, we’ve released a public codebase with a pip-installable package, though unfortunately we are not allowed to link to it (even anonymously). Thank you again for your suggestions, and we’re really happy to see that you found the paper to make a significant contribution to interpretability research. --- Rebuttal Comment 1.1: Comment: Thanks for the follow-up experiments, I think they strengthen the paper's results. I've read the other reviews and stand by my score (though, naturally, cannot increase it)
Summary: This paper proposes a method to generate hypotheses using SAEs. The first step of the method involves generating interpretable features by training SAEs on feature embeddings. The second step involves identifying which features are predictive for a task, using Lasso. Finally, the third step involves using LLMs to generate human interpretable natural language explanations for the identified predictive features. The experiments show that this method identifies better hypotheses than baselines, while requiring 1-2 orders of magnitude less compute. Claims And Evidence: The claims made by this submission are very clear and are well-supported by evidence. Methods And Evaluation Criteria: The methods and evaluation criteria make sense for the problem and application at hand. Theoretical Claims: I did not formally check the correctness of the theoretical claims. Experimental Designs Or Analyses: I did not find any issues with the soundness or validity of the experimental designs. Supplementary Material: I did not carefully review the supplementary material. Relation To Broader Scientific Literature: One of the key contributions of the paper is proposing a compute-efficient method to perform hypothesis discovery using LLMs. While some previous methods (Zhou et al., "Hypothesis Generation with Large Language Model", 2024) entirely rely on LLMs to propose hypotheses for relationships between two target variables, and other methods (Zhong et al., "Explaining Datasets in Words: Statistical Models with Natural Language Parameters", 2024) require large amounts of compute to achieve this task, this paper performs the same task with about 1-2 orders of magnitude reduction in compute. Another contribution is using SAEs in a systematic manner to identify hypotheses. 
While most work in the SAE literature focusses on improving or measuring the fidelity of the representations, this paper makes use of imperfect interpretations to generate hypotheses, while being able to reason about the corresponding fidelity of the hypotheses generated. Essential References Not Discussed: None that comes to mind. Other Strengths And Weaknesses: Strengths: + This paper proposes a systematic framework to generate and identify hypotheses using SAEs / Lasso, along with LLMs in the loop. The paper is overall well-written and its goals are clearly stated. The biggest advantage of this method is that it seems to use significantly less compute than the baselines, making this a useful contribution to the field. Weaknesses: - **Missing non-SAE baseline**: The proposed method consists of three independent steps; the first involves training an SAE, the second involves identifying suitable features via a Lasso regressor. Finally, an auto-interpretation step is proposed to interpret the identified features in terms of natural language explanations. It would be great to have an ablation analysis that tests the utility of step 1, i.e., using SAE features. For example:
- why not use trained bottleneck features like NLParam? or
- directly use Lasso on top of the pre-trained language embedding features, skipping the SAE step altogether?

These ablations would help assess the importance of the SAE. Other Comments Or Suggestions: As a suggestion, the phrase "triangle inequality" may not be appropriate for the result in Proposition 3.1, as such a result requires three quantities in a shared space (like a field or a vector space) and corresponding notions of distance between them. While the proposed theory is useful, using the term "triangle inequality" might lead to confusion, and I suggest that the authors reconsider its usage in the paper. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the thoughtful and positive review. We’re glad that you found the claims of the paper to be very clear and well supported. We agree that a significant benefit of our method is that it reduces compute requirements by 1-2 orders of magnitude, and believe this will facilitate real-world applications. We would like to also emphasize what we see as an even more important benefit of the method: its performance in identifying predictive hypotheses. On synthetic tasks (where we know ground-truth hypotheses), the method outperforms all three baselines on 11/12 metrics. On real-world tasks, the method outperforms all three baselines on 5/6 metrics, generating 45 significant hypotheses out of 60 candidates, while the next best generates only 24. We also appreciate your feedback about the phrase “triangle inequality.” Perhaps we could call it a “sufficient condition for hypothesis generation.” We would be curious for your thoughts on this phrase. An alternative could be to drop the phrase. We also thank you for suggesting ablations, which we agree would clarify the importance of the SAE. On the three real-world datasets (Headlines, Yelp, Congress), we ran both ablations you mentioned—using embeddings directly and fitting a bottleneck—and a third suggested by R2, which selects features from an embedding PCA. We keep other steps of our method fixed.

**Embedding**: Run Lasso to select 20 dimensions directly from embeddings.

**PCA** (Suggested by R2): Transform embeddings to PCA basis (k=256, which explains ~80% variance), then select 20 dimensions using Lasso.

**Bottleneck**: Fit a simple neural network which uses the embeddings as input, a hidden 20-neuron “bottleneck” layer, and predicts the target as output. Then interpret the 20 bottleneck neurons. (This is conceptually similar to NLParam, but simpler.)
| Method | Total Count | Average Predictiveness | Average Fidelity |
|------------|-------------|------------------------|------------------|
| SAE | **45/60** | **0.717** | **0.836** |
| Embedding | 30/60 | 0.706 | 0.545 |
| PCA | 23/60 | 0.697 | 0.547 |
| Bottleneck | 27/60 | 0.699 | 0.640 |

Metrics are combined/averaged across the three datasets. Recall that **count** is the number of significant hypotheses in a multivariate regression; **predictiveness** is the AUC (or R^2 for Yelp) using all hypotheses together; **fidelity** is the F1 score for how well the interpretation matches the neuron activations. Using the SAE produces many more hypotheses which are significant in a multivariate regression (45/60 across the three datasets for SAE vs. 30/60 for the next best baseline); slightly higher predictiveness; and much higher interpretation fidelity. This is consistent with our qualitative finding that SAE neurons fire on more specific, distinct concepts than embeddings. Specificity permits the high-fidelity interpretations, and distinctiveness results in the wide breadth of hypotheses. While these baselines perform reasonably well in terms of predictiveness, for downstream utility, predictiveness is only a prerequisite: we ultimately want hypotheses that are non-trivial and non-redundant (as is discussed in Sec 6.2). We believe the results of these baselines clarify the value of the SAE in our setting, and we are excited to include them in the updated version of the paper; thank you for suggesting them! We believe that we have now addressed the only weakness you raised (comparison to additional baselines) and were hoping, in light of this, that you would be willing to raise your score. If not, please let us know if there are further concerns we could address or experiments we could conduct. Thank you again for your suggestions, which have strengthened the paper! 
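A minimal sketch of the PCA baseline described in the rebuttal (transform embeddings to a PCA basis, then let Lasso pick a small set of dimensions). The synthetic embeddings, the toy sizes, and the Lasso alpha are illustrative assumptions; the rebuttal uses k=256 components and selects 20 dimensions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(300, 50))   # toy stand-in for text embeddings
# Synthetic target driven by two embedding dimensions, plus noise.
target = embeddings[:, 0] - embeddings[:, 1] + rng.normal(0, 0.1, 300)

pca = PCA(n_components=20)                # rebuttal uses k=256
Z = pca.fit_transform(embeddings)

lasso = Lasso(alpha=0.05)
lasso.fit(Z, target)
selected = np.flatnonzero(lasso.coef_)    # PCA dimensions kept by Lasso
print(len(selected))
```

Each selected PCA dimension would then be interpreted (e.g., via auto-interp on its top-activating examples), mirroring the SAE pipeline with the SAE step swapped out.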
--- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal and for performing the non-SAE baseline experiments! It is indeed an interesting and very surprising finding that the SAE seems critical for proposing predictive hypotheses. These results may make a great addition to the paper. Overall, I maintain my current rating.
Winner-takes-all for Multivariate Probabilistic Time Series Forecasting
Accept (poster)
Summary: This paper addresses a time-series forecasting problem, where the model generates multiple forecasts for each timestamp. The authors propose TimeMCL, a method based on the Multiple Choice Learning (MCL) paradigm that can output multiple plausible forecasts via multiple prediction heads and score heads. TimeMCL is trained with a Winner-Takes-All (WTA) loss and a score head loss to produce diverse forecasts from the multiple heads; due to the nature of the WTA loss, the gradient is computed only for the head with the minimum loss value. The authors claim that TimeMCL can be viewed as a functional quantizer, supported by theoretical analysis. Experimental results on multiple datasets demonstrated that the proposed method performs better than the baselines in terms of the Distortion metric, while it performs comparably with the baselines in terms of the standard metrics. Claims And Evidence: * In Eq.5, why is not x_{t-1} inputted to the function gamma? * In l.134 of the right-hand side in p.2, why is not x_{t-1} inputted to the function f^k_theta? * The authors mentioned that the score head can avoid overconfident heads, but how they avoid them is not described. * Can we not replace min in the WTA loss with the gamma and jointly train with the WTA loss, which can be a more straightforward approach? It can be similar to Eq.9. * Proposition 5.2. is analyzed only with the binary cross entropy for the score heads. However, s in TimeMCL are shared in the score heads and heads, and TimeMCL is trained based on the compound loss with the WTA loss, which is not the direct case of Proposition 5.2. Methods And Evaluation Criteria: * Why is the Distortion a fair metric? How is the Distortion computed for the baselines? Can we use the score head to choose the hypotheses and use the standard metrics? * The paragraph "Comparing TimeMCL with the baselines on standard metrics" describes only Tables 6 and 7 in the Appendix. This may not be appropriate in terms of the page limit regulation. 
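The WTA training signal summarized above can be sketched numerically: compute a per-head loss and keep only the winner, so only the closest head would receive gradient. This is a generic numpy illustration of the principle, not TimeMCL's actual implementation (which also trains score heads on the winner assignments), and the toy arrays are made up.

```python
import numpy as np

def wta_loss(hypotheses, target):
    """Winner-takes-all: per-head squared error, keep only the minimum.
    hypotheses: (K, horizon) candidate futures; target: (horizon,).
    Returns the winner's loss and its index (only the winner gets gradient)."""
    per_head = ((hypotheses - target) ** 2).mean(axis=1)  # (K,)
    winner = int(per_head.argmin())
    return per_head[winner], winner

K, horizon = 4, 6
target = np.linspace(0.0, 1.0, horizon)
# Toy heads: head 0 matches the target exactly, others are offset.
hypotheses = np.stack([target + 0.1 * k for k in range(K)])
loss, winner = wta_loss(hypotheses, target)
print(winner, float(loss))  # → 0 0.0
```

In a score-head variant, the binary target "head k won" (here, k = 0) would supply the cross-entropy label for the score heads.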
Theoretical Claims: * The descriptions around l.215 of the right-hand side in p.4 are not self-contained. For example, z is not defined. Experimental Designs Or Analyses: Please see Methods And Evaluation Criteria. Supplementary Material: Yes. All parts. Relation To Broader Scientific Literature: TimeMCL, a method based on the Multiple Choice Learning (MCL) paradigm, can be novel and practically important. Essential References Not Discussed: NA Other Strengths And Weaknesses: Clarity issues: * In l.82 of the right-hand side in p.2, the subscripts may be wrong in the equation. * In l.87 of the right-hand side in p.2, the rightmost "..." may be unnecessary. * The character T is used with multiple meanings: the forecasting horizon and the annealed temperature. * Figures 1 and 2 appear in reversed order. Other Comments Or Suggestions: NA Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their positive feedback on the paper. > In Eq.5, why is not $x_{t-1}$ inputted to the function gamma? This is because, with our notations, $\gamma^k_{\theta}$ corresponds only to the head. The full model writes as $\gamma_\theta^k \circ s_\theta$, where $s_{\theta}$ is the backbone (See section 3,4 and Figure A of the [rebuttal pdf](https://anonymous.4open.science/r/TimeMCL_ICML-E616/TimeMCL_ICML.pdf)). > In l.134 of the right-hand side in p.2, why is not $x_{t-1}$ inputted to the function $f^k_\theta$? Same as above. This is because, with our notations, $f^k_{\theta}$ corresponds only to the head. The full model writes as $f_\theta^k \circ s_\theta$, where $s_{\theta}$ is the backbone. This will be made clearer with an illustration as suggested by Reviewer uVhD. > The authors mentioned that the score head can avoid overconfident heads, but how they avoid them is not described. Overconfidence is a known issue in Multiple Choice Learning (Rupprecht et al., Lee et al.), where some heads associated with very low-probability zones may not be distinguishable from plausible hypotheses at inference time. Score heads solve this issue by learning the probability of each head. > Can we not replace min in the WTA loss with the gamma and jointly train with the WTA loss, which can be a more straightforward approach? It can be similar to Eq.9. Does the reviewer suggest training a loss that is a sum of the $L_{\theta}^{k}(x_{1:t_{0}-1},x_{t_{0}:T})$ weighted by the $K$ predicted scores? The suggestion of the reviewer is very interesting, as, when the hypotheses are fixed, it would encourage the score associated with the lowest distance to increase (and the others to decrease). However, this would make the prediction head loss dependent on the values of the scores, which may produce different training dynamics than the current model. 
In the current version, the score head objective depends on the position of the hypotheses, but not the opposite. This would definitely be a promising try for further work. > Proposition 5.2. is analyzed only with the binary cross entropy for the score heads. However, s in TimeMCL are shared in the score heads and heads, and TimeMCL is trained based on the compound loss with the WTA loss, which is not the direct case of Proposition 5.2. Indeed, Proposition 5.2 is analyzed only with the binary cross entropy for the score heads. In this proposition, we implicitly assume that the prediction heads have already converged, so that the task of learning the probability mass of each cell becomes doable for the score heads. Indeed, in accordance with Letzelter et al. (2024b) (Section C.1.2), we observed that the WTA training scheme leads to a fast convergence of the predictions, while the scoring heads are slightly slower to train because they need the prediction heads to have already converged to do so. This will be made clearer in the assumptions of this proposition. > Why is the distortion a fair metric? The Distortion is known from the quantization literature (Pagès, G., 2015), as a way to assess the quality of a set of $K$ samples $z_k$, $k = 1,...,K$, for quantizing a target distribution $p$, with: $$ D_2 := \int_{\mathcal{X}} \min_{k=1, \ldots, K} \left\| z_k - x \right\|_{2}^2 \mathrm{d}p(x). $$ In our setup, the distortion we are considering is the generalization of the above where the samples $z_k$ are functions of the context. In the context of time series, it writes $$ \int_{\mathcal{X}^{T}} \min_{k=1, \ldots, K} \left\| z_k\left(x_{1: t_0-1}\right) - x_{t_0: T} \right\|^2 \; \mathrm{d}p(x_{1: t_0-1}, x_{t_0: T}) \simeq \frac{1}{N} \sum_{i} \min_{k=1, \ldots, K} \left\| z_k\left(x^i_{1: t_0-1}\right) - x^i_{t_0: T} \right\|^2. $$ where $(x^i_{1: t_0-1},x^i_{t_0: T})$ are samples from $p(x_{1: t_0-1},x_{t_0: T})$. 
It implicitly assesses how well a given set of predicted samples covers a target distribution. The distortion is a fair metric provided that the same number of hypotheses is used for each baseline, as the distortion is expected to improve with $K$ in the optimal case, as per the rate-distortion curve (Gray, 1989). Gray, Robert M. Source coding theory. Vol. 83. Springer Science & Business Media, 1989.
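The empirical distortion in the display above can be computed directly. The sketch below fixes a single context (so the hypotheses are plain vectors rather than functions of $x_{1:t_0-1}$) and uses made-up toy data; it is an illustration of the metric, not the paper's evaluation code.

```python
import numpy as np

def empirical_distortion(hypotheses, targets):
    """Mean over samples of the squared distance to the nearest hypothesis.
    hypotheses: (K, d) predicted futures; targets: (N, d) observed futures."""
    # (N, K) matrix of squared distances, then min over the K hypotheses.
    d2 = ((targets[:, None, :] - hypotheses[None, :, :]) ** 2).sum(axis=2)
    return float(d2.min(axis=1).mean())

# Two hypotheses and three observed futures in d=2 dimensions.
hypotheses = np.array([[0.0, 0.0], [1.0, 1.0]])
targets = np.array([[0.0, 0.1], [0.9, 1.0], [0.5, 0.5]])
print(empirical_distortion(hypotheses, targets))
```

Lower values mean the hypothesis set covers the observed futures more tightly; a fair comparison holds K fixed across methods, as the rebuttal notes.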
Summary: The paper presents a new method, TimeMCL, for time series forecasting. The proposed method uses Multiple Choice Learning with a Winner-Takes-All (WTA) loss to forecast multiple plausible time series futures. The paper uses synthetic data to show that TimeMCL is a functional quantizer. The proposed TimeMCL is compared with two diffusion methods on standard datasets. Claims And Evidence: 1. The main claim that TimeMCL is a functional quantizer is supported by mathematical proofs and the use of synthetic data 2. The forecasting ability of TimeMCL is supported by comparing with 2 diffusion methods Methods And Evaluation Criteria: The proposed method is evaluated on standard datasets. The comparisons are done with 2 SOTA diffusion methods. There should be more comparisons with other SOTA methods Theoretical Claims: The paper presented proofs and used synthetic data to support TimeMCL as a Functional Quantizer. I did not check the correctness of the mathematics and proofs. Experimental Designs Or Analyses: The datasets used are usually the ones that are used in SOTA time series forecasting methods. Two SOTA diffusion methods are used for comparisons. The results would be more convincing if compared with more SOTA methods. Supplementary Material: I have reviewed all the appendices, although not able to check the material equations and provided proofs Relation To Broader Scientific Literature: The paper proposes a WTA approach for time series forecasting and I believe it does not relate to broader scientific literature. Essential References Not Discussed: The works referenced in the paper are adequate, although I believe other SOTA works must be discussed, including graph deep learning methods, e.g. STGNN. Other Strengths And Weaknesses: Strengths: 1. Use of synthetic data 2. Mathematical equations and proofs 3. The use of multiple performance criteria Weaknesses 1. Need to consider datasets from more domains e.g. stock market 2. 
Need to compare with other SOTA time series forecasting methods 3. In comparing with TimeGrad and DeepAR, there is an assumption that these are the 2 best methods for time series forecasting. Other Comments Or Suggestions: 1. Please add a figure that shows the network architecture of the proposed method and a figure that visually describes the proposed technique. 2. Figures and tables should be closer to the text descriptions, e.g. figure 1 is on a different page than the description; the same is true of tables 2 & 3. Questions For Authors: 1. How will the proposed approach perform on other domains, e.g. the stock market? 2. How do the graph transformer methods, e.g. STGNN, compare with the diffusion methods in general and with TimeMCL in particular? Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their relevant remarks. The [rebuttal pdf](https://anonymous.4open.science/r/TimeMCL_ICML-E616/TimeMCL_ICML.pdf) is attached to the response. Figure A of the pdf will be included in the paper.

### Comparison with more SOTA methods

> The results will be more validated if compared with more SOTA methods.

We conducted this benchmark using DeepAR and TimeGrad to demonstrate that, with consistent settings across baselines (e.g., backbone, data scaler, training details), our approach offers competitive distortion at a low computational cost. Since we used an RNN backbone, we felt comparing it with other architectures, such as transformers, would complicate conclusions. However, we acknowledge the reviewer’s point that additional baselines could strengthen our work. To address the reviewer's comment and enhance our evaluation, we added additional models. Specifically, we included Tactis-2 (Ashok et al., 2024), a transformer-based model based on non-parametric copulas, and TempFlow (Rasul et al., 2020), which uses conditioned normalizing flows (with both RNN and transformer backbones). For completeness and as suggested by Reviewer 2HFL, we also included exponential smoothing (ETS) as a simple baseline without neural networks.

### Results analysis

**Distortion Comparison (Table A)**
* TimeMCL outperforms TempFlow, both when using the same RNN backbone and when TempFlow is based on a Transformer.
* Tactis proves to be a strong competitor in terms of Distortion, though at a significantly higher computational cost (see Table H).
* We conducted an ablation study on the number of hypotheses (Table E). We observed that TimeMCL consistently achieves the best performance (except when using only one hypothesis).

**Inference run-time (Table H)**
* Among neural methods, TimeMCL and DeepAR demonstrate the best trade-off between speed and performance.
* ETS achieves the fastest inference but exhibits weaker performance on other metrics. 
**Smoothness Analysis (Table B)**
* TimeMCL achieves the best smoothness scores, as measured by Total Variation.
* This supports our theoretical claim from Section 5.2, which predicts that TimeMCL generates smoother trajectories.

**Additional metrics (Tables C & D)**
* TimeMCL does not significantly improve RMSE, CRPS, or the Energy Score (Table G), as expected, since it does not directly optimize these metrics.

We plan to extend this comparison by implementing TimeMCL with a transformer-based architecture as the backbone, and adhering to Tactis's training details for more accurate performance comparison.

> How do the graph transformer methods, e.g. STGNN, compare with the diffusion methods in general and with TimeMCL in particular?

Regarding spatio-temporal graph transformer methods, does the reviewer have a specific method in mind? Most STGNN methods we found are tailored to specific tasks (e.g., traffic prediction in Luo et al., 2023). However, we did identify StemGNN (Cao et al., 2020), which integrates graph and attention mechanisms and is evaluated on similar data. However, comparing our approach with StemGNN is difficult, as it is non-probabilistic and generates only one prediction per input. As future work, exploring a graph transformer-based approach in place of our RNN backbone could be promising. 
We also provide a visualization of the predictions, along with the baselines, in Figure B, which is akin to Figure 2 of the main paper.

### Missing references

> I believe other SOTA works must be discussed including the Graph Deep learning methods e.g. STGNN.

To make our benchmarks more comprehensive, we’ve included Tactis-2, TempFlow (with its transformer-based variant), and non-neural exponential smoothing (ETS) methods (Hyndman et al., 2008). Additionally, STGNN will be referenced in the paper.

Luo, X., Zhu, C., Zhang, D., & Li, Q. (2023). Stg4traffic: A survey and benchmark of spatial-temporal graph neural networks for traffic prediction.
Cao, Defu, Yujing Wang, Juanyong Duan, Ce Zhang, Xia Zhu, Congrui Huang, Yunhai Tong et al. "Spectral temporal graph neural network for multivariate time-series forecasting." In NeurIPS 2020.
Rasul, Kashif, Abdul-Saboor Sheikh, Ingmar Schuster, Urs M. Bergmann, and Roland Vollgraf. "Multivariate Probabilistic Time Series Forecasting via Conditioned Normalizing Flows." In ICLR 2021.

--- Rebuttal Comment 1.1: Comment: After reading author's rebuttal and other reviews, I am updating score to accept
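As a side note on the smoothness metric referenced in Table B: a common discrete Total Variation measure for a sampled trajectory is the sum of absolute successive differences. Whether the paper normalizes or aggregates it differently is not specified here, so this sketch gives only the standard definition on made-up trajectories:

```python
import numpy as np

def total_variation(trajectory):
    """Discrete total variation of a 1-D trajectory: sum of absolute
    successive differences. Lower values mean smoother forecasts."""
    x = np.asarray(trajectory, dtype=float)
    return float(np.abs(np.diff(x)).sum())

smooth = [0.0, 0.25, 0.5, 0.75, 1.0]   # monotone ramp
wiggly = [0.0, 1.0, 0.0, 1.0, 0.0]     # oscillating path
print(total_variation(smooth), total_variation(wiggly))  # → 1.0 4.0
```

Under this definition, the claim "TimeMCL generates smoother trajectories" corresponds to lower average total variation across predicted hypotheses.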
Summary: This work introduces TimeMCL, a time series forecasting model that looks to project plausible future scenarios and their associated probabilities to better forecast multimodal distributions. The model learns multiple heads, as well as scores associated with each head to estimate the probability of a given head being correct. Training these heads with vanilla Winner-takes-all (WTA) loss can result in under-trained heads for unimodal distributions, so the authors explore relaxations that enable more distributed gradients. The authors provide a theoretical analysis of TimeMCL, demonstrating that their combined architecture and training scheme leads to a Voronoi tessellation of future trajectories, which they show on synthetic datasets. They also conduct experiments on six typical TS datasets from GluonTS wrt Distortion, FLOPs, RMSE and CRPS. The main claimed contributions are: - the TimeMCL approach that takes a backbone and trains multiple heads for it using (two variations of relaxed) Winner-Takes-All loss (with scoring heads) - the theoretical analysis of TimeMCL as a functional quantizer - the evaluation of TimeMCL with an RNN backbone on synthetic and real-world benchmarks Claims And Evidence: > TimeMCL forecasts diverse possible futures & provides smooth forecasts - This is visualized in Figures 2-4. It would be nice to also provide some sort of quantitative measure of diversity here, e.g. Frechet or even just dispersion averaged across series and time points. The same goes for smoothness. > TimeMCL is a stationary conditional functional quantizer - This seems to be true under the proposed assumptions, but it's unclear what value this provides as the underlying clustering problem is NP-hard, and you're dealing with very high-dimensional data. > TimeMCL is compared against SOTA probabilistic forecasters - The TACTiS models that you cited are not compared against, which is odd. 
- https://arxiv.org/abs/2202.03528 - https://arxiv.org/abs/2310.01327 - Missing simple baselines against which to compare on the real-world datasets, e.g. naive, drift, ARIMA, exp-smoothing, etc. Methods And Evaluation Criteria: The datasets are typical for ML forecasting, although it is difficult to know how much multivariate correlation is present in these datasets. Therefore, performance on synthetic multivariate tasks would be insightful, e.g. correlated Brownian motion and/or VAR processes. Theoretical Claims: - I reviewed the proof of proposition 5.1 briefly. I am mostly unfamiliar with the clustering literature, but from what I gather, k-means is NP-hard, so I'm not sure what this formulation actually brings given the extremely high dimensionality of multivariate trajectory data. Maybe in the limit of expressivity and time, this algorithm converges to the optimal Voronoi tessellation due to the two-step formulation, but it would help to analyze the rate of convergence theoretically and experimentally. - It's unclear why the architectures for sections 5 and 6 differ. - I did not review the proof of proposition 5.2 Experimental Designs Or Analyses: - A training/inference runtime analysis would be nice in addition to flops - It would be good to add in other relevant baselines, e.g. TACTiS and simple baselines - You could also assess the models using actual multivariate metrics, e.g. energy score, variogram, etc. (see https://arxiv.org/abs/2407.00650) Supplementary Material: Did not review other than for proof of proposition 5.1. Relation To Broader Scientific Literature: This paper relates to a recent line of work about time series forecasting using machine learning methods. Specifically, the paper belongs to two niches: multivariate forecasting and probabilistic forecasting. Cited methods, including DeepAR, TimeGrad and TACTiS, figure among this line of work as well. 
Essential References Not Discussed:
- Missing reference to distortion being standard in quantization
- Other references to multivariate probabilistic forecasting:
- https://arxiv.org/abs/2410.02168
- https://ojs.aaai.org/index.php/AAAI/article/view/29085
Other Strengths And Weaknesses:
- Extremely clear and well-written paper
- The main points that can move my score are:
- experiments with multivariate synthetic tasks
- multivariate metrics (energy score, variogram)
- comparison to TACTiS-2
Other Comments Or Suggestions: Typos:
- paragraph at line 84 has some weird verb conjugation, e.g. "one have"
- missing period line 100-101 col 2 before "WTA".
- the notation, and especially the overload of x, feels a bit clumsy, e.g. on line 112-113 you switch to the superscript for the hypotheses, which took a minute to notice since the subscript is omitted (I imagine to avoid it being too heavy). You can probably just use \mathbf{x} for a vector to prevent the cognitive break during reading.
- line 120 col 1: you might want to reiterate that these are the "final projection" heads here instead of just "hidden state representation"
- Tables 6 and 7 belong in the main text, as they are key evaluation results with more established metrics than distortion.
- Line 167 col 2: "finite over if"
- Line 171 col 2: the first clause of this paragraph is incomprehensible.
Questions For Authors:
> covariates c1:T, the latter being omitted in the following for conciseness
Can the model accept covariates?
---
> To compute TimeMCL metrics while respecting hypothesis probabilities, we resample with replacement from the K hypotheses obtained in a single forward pass, weighting them by their assigned probabilities before computing metrics.
Can you explain this in more detail, please? I'm not sure I follow; I figured that the score heads provided the probabilities.
---
Did you look at methods that might enable a variable number of heads?
Seems like the optimal tessellation is conditional on the number of heads, so you can't really "extend" the number of heads.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
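To make the training objective concrete, the relaxed Winner-Takes-All loss discussed in this review can be sketched in a few lines of NumPy. This is a minimal illustration: the function name, shapes, and the eps value are ours, not the authors' implementation.

```python
import numpy as np

def relaxed_wta_loss(hyps, target, eps=0.1):
    """Relaxed Winner-Takes-All loss over K >= 2 hypotheses.

    hyps:   (K, T) array of forecast hypotheses
    target: (T,)   ground-truth trajectory

    The winner (closest hypothesis) gets weight 1 - eps; the K - 1
    losers share eps, so every head receives some gradient -- the
    relaxation mentioned above for avoiding under-trained heads.
    """
    errs = np.sum((hyps - target) ** 2, axis=1)     # per-hypothesis squared L2 error
    weights = np.full(len(errs), eps / (len(errs) - 1))
    weights[np.argmin(errs)] = 1.0 - eps
    return float(np.dot(weights, errs))
```

With eps = 0 this reduces to vanilla WTA (only the winner is trained); larger eps distributes more gradient to the losing heads.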
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. The [rebuttal pdf](https://anonymous.4open.science/r/TimeMCL_ICML-E616/TimeMCL_ICML.pdf) is attached to the response.
### Clarification of the theory
> TimeMCL is a stationary conditional functional quantizer [...], but it's unclear what value this provides as the underlying clustering problem is NP-hard, and you're dealing with very high-dimensional data.

The reviewer raises a valid point. However, as shown in Figure 1, our toy example qualitatively demonstrates that our training scheme closely aligns with the target conditional quantizer in practice, even when forecasting across 250 time steps (see also Appendix B for details).
> Maybe [...] this algorithm converges to the optimal Voronoi tessellation [...], but it would help to analyze the rate of convergence theoretically and experimentally.

Theorem 2 of Loubes & Pelletier (2017) provides an asymptotic upper bound on the distortion error with respect to the number of training pairs in quantizer learning. Extending this result to neural networks trained with WTA loss is a promising direction for future work.
### Additional baselines and metrics
> A runtime analysis would be nice [...].
> It would be good to add in other relevant baselines, e.g. TACTiS and simple baselines
> You could also assess the models using actual multivariate metrics, e.g. energy score [...]
> It would be nice to also provide [...] a measure of diversity, [...] and smoothness.

In response to the reviewer, we added experiments with a simple baseline, ETS exponential smoothing (Hyndman et al., 2008), which does not rely on neural networks. We also included TempFlow (Rasul et al., 2021), a normalizing flow-based method using both RNN and Transformer backbones, as well as Tactis-2 (Ashok et al., 2024), a copula method with a Transformer backbone. These methods were evaluated across the same six datasets.
In addition to the metrics from the original paper (Distortion, RMSE, and CRPS-Sum), we followed the reviewers' suggestions and included inference runtime, smoothness, and Energy Score.
### Results analysis
**Distortion Comparison (Tables A, E, F)**
* TimeMCL outperforms TempFlow, and Tactis proves to be a strong competitor in terms of Distortion, though it is slower at inference (see Table H).
* An ablation study on the number of hypotheses (Table E) shows that TimeMCL consistently achieves the best performance (except with $K=1$).
**Inference run-time (Table H)**
* Among neural network-based methods, TimeMCL and DeepAR demonstrate the best trade-off between speed and performance.
* ETS achieves the fastest inference, with weaker performance otherwise.
**Smoothness Analysis (Table B)**
* TimeMCL achieves the best smoothness scores, as measured by Total Variation (averaged over predictions).
* This supports our theoretical claim from Section 5.2, which predicts that TimeMCL generates smoother trajectories.
**Additional metrics (Tables C & D)**
* TimeMCL does not significantly improve RMSE, CRPS, or the Energy Score (Table G), since it does not directly optimize these metrics.
We did not conduct further experiments with diversity, as we believe Distortion implicitly captures it. When comparing TimeMCL (using an RNN backbone) with methods using transformer backbones, it is hard to tell whether performance improvements are due to the training method or the backbone. To clarify, we plan to implement TimeMCL with a transformer backbone, following Tactis's training details.
### Additional datasets
> It is difficult to know how much multivariate correlation is present in these datasets. Therefore, performance on synthetic multivariate tasks would be insightful

In response, we conducted experiments on a new dataset of correlated financial cryptocurrency time series (see Table J), with the correlation matrix in Table K.
TimeMCL was trained with an aMCL loss and compared to previous baselines. Results in Table I show that our method remains competitive, excelling in both Distortion and Smoothness, with strong CRPS performance. Visualizations are in Figure B.
### Additional questions
> Can the model accept covariates?

The model can accept covariates. In previous implementations, these typically serve as additional concatenated input features.
> To compute TimeMCL metrics [...], we resample with replacement from the K hypotheses obtained [...]. Can you explain this in more detail, please?

Our implementation extends GluonTS (Alexandrov et al., 2020) with minimal code changes. Instead of modifying evaluation functions, we resample from the K hypotheses, weighted by their probabilities, allowing us to use existing evaluation functions (e.g., CRPS) without rewriting metrics for TimeMCL.
> Did you look at methods that might enable a variable number of heads?

Indeed, the number of predictions must be predefined beforehand. Exploring dynamic "rearrangements" of hypotheses when adding new ones, without full retraining, is left for future work.
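The weighted resampling scheme described in the rebuttal can be sketched as follows. This is a minimal NumPy illustration; the function name, variable names, and the fixed seed are ours, not the authors' code.

```python
import numpy as np

def resample_hypotheses(hyps, probs, n_samples, seed=0):
    """Resample with replacement from K hypotheses, weighted by the
    probabilities predicted by the score heads, so that off-the-shelf
    sample-based evaluation (e.g. CRPS in GluonTS) can be reused as-is.

    hyps:  (K, T) array of hypotheses from a single forward pass
    probs: (K,)   scores, assumed to sum to 1
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(hyps), size=n_samples, replace=True, p=probs)
    return hyps[idx]  # (n_samples, T) pseudo-sample set
```

The resulting array can be fed to any evaluator that expects Monte Carlo forecast samples, which is presumably why this avoids rewriting the metrics.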
Summary: This paper proposes the idea of generating a diverse set of forecast trajectories instead of a single trajectory as is typically done. Prior approaches involved sampling from the output distribution, such as in TimeGrad using a diffusion process, or sampling from other models generating a distribution. However, such methods do not necessarily produce diverse outputs covering the whole space of outputs. This paper proposes using the idea of winner-takes-all (WTA) to learn a tessellation of the output space (similar to k-means) using multiple output heads and uses it to generate several representative outputs. Experimental results show superior performance metrics compared to other comparable baselines. While the proposed method does not perform well in terms of CRPS or RMSE, this is expected, as the method does not aim at producing mean or median forecasts, which minimize traditional losses.
Claims And Evidence: All methodological claims are supported by experimental results and analysis showing that they work as intended. Theoretical claims are also evidenced with proofs in the appendix.
Methods And Evaluation Criteria: The proposed approach is suitable for the problem studied. WTA has been shown to work for other domains such as vision, and this paper shows that it works for time series forecasting as well. The paper uses the standard benchmark datasets for evaluation, and evaluations are extensive and sufficient.
Theoretical Claims: Theoretical proofs in the supplementary material were not checked.
Experimental Designs Or Analyses: The experimental setup is sound
- Using synthetic time series to test the correctness of the approach is reasonable, and experimental results show that the method is working as intended.
- The results are evaluated using distortion - a reasonable metric to test the accuracy of a diverse set of outputs.
Supplementary Material: Reviewed the additional results in the appendix.
- additional experiment metrics (RMSE, CRPS) and visualizations.
- details of the experimental setup.
- details of the synthetic models.
Relation To Broader Scientific Literature: The idea is significant and highly relevant to the time series forecasting literature. Generating a diverse set of predictions is a relevant problem in machine learning in general (especially w.r.t. generative models), and such ideas being extended to forecasting is highly relevant to the time series community.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: This paper presents a novel approach to modeling multi-modal outputs for time series forecasting. As such, I advocate for the acceptance of this paper. However, the motivation behind the applicability of the paper to real-life forecasting problems is unclear.
Other Comments Or Suggestions: N/A
Questions For Authors: It seems that the proposed method is akin to the k-means algorithm. K-means is quite sensitive to initialization. Can you comment on the sensitivity of WTA to the initialization of the hypotheses? It is possibly the case that the loss is such (convex or some similar property) that any initialization would lead to the hypotheses shifting appropriately and aligning with the actual distribution during training.
What are some important applications for this method in real-life scenarios? In what settings would someone want to generate forecasts from multiple hypotheses? One example that comes to mind is stock market prediction: it would be important to look at a diverse set of forecasts to understand any extreme predictions. Another is product demand forecasting, where a retail company may want to prepare for different scenarios.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are grateful to the reviewer for their positive feedback and insightful comments.
### Sensitivity to initialization
> K-means is quite sensitive to initialization. Can you comment on the sensitivity of WTA to the initialization of the hypotheses?

As MCL can be seen as a conditional and gradient-based variant of K-Means, it inherits some of its limitations, in particular the sensitivity to initialization. This is also related to the known collapse issue (Rupprecht et al.) in the MCL literature, where some of the hypotheses may never be chosen, leading to suboptimal Distortion performance. This is the reason why we decided to leverage WTA variants (aMCL, Relaxed-WTA) for TimeMCL, which make the algorithm more robust to these issues. Note also that previous works (Letzelter et al., 2023; Shekarforoush et al., 2024) have noticed that this collapse issue can, in some settings, be naturally solved by the randomness of the data distribution. As mentioned in the Limitations paragraph of the submission, further work will include enhanced normalization techniques to further improve the quality of the optimum of the vanilla TimeMCL.
### Motivation clarification
> The motivation behind the applicability of the paper to real-life forecasting problems is unclear.
> What are some important applications for this method in real-life scenarios? In what settings would someone want to generate forecasts from multiple hypotheses?

Indeed, compared to generative models, we believe TimeMCL has the ability to capture rare events or "modes" in the conditional distribution. This is for instance illustrated in Figure 2 (middle) on the Solar dataset, where one of the hypotheses captures a rare event. This can also be useful for stock market prediction, for capturing trend reversals.
For clarification, we included a use case with financial data in the [rebuttal pdf](https://anonymous.4open.science/r/TimeMCL_ICML-E616/TimeMCL_ICML.pdf) (see, e.g., Figure B), for which details are provided in the answers to Reviewer 2HFL and Reviewer uVhD.
Letzelter, Victor, Mathieu Fontaine, Mickaël Chen, Patrick Pérez, Slim Essid, and Gaël Richard. "Resilient Multiple Choice Learning: A learned scoring scheme with application to audio scene analysis." In NeurIPS, 2023.
Shekarforoush, Shayan, David Lindell, Marcus A. Brubaker, and David J. Fleet. "CryoSPIN: Improving Ab-Initio Cryo-EM Reconstruction with Semi-Amortized Pose Inference." In NeurIPS, 2024.
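Since both the question and the answer above lean on the k-means analogy, the underlying quantization objective, distortion, can be written down explicitly. This is a minimal NumPy sketch; the function name and shapes are ours, not the authors' evaluation code.

```python
import numpy as np

def distortion(hyps, samples):
    """Quantization distortion of K hypotheses against samples drawn
    from the target distribution: the mean, over samples, of the
    squared distance to the *closest* hypothesis -- the conditional
    analogue of the k-means objective discussed above."""
    d2 = ((samples[:, None, :] - hyps[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    return float(d2.min(axis=1).mean())
```

As in k-means, the objective is non-convex in the hypotheses, which is why initialization and hypothesis collapse matter in the first place.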
Attention-Level Speculation
Accept (poster)
Summary: This paper presents a novel infra method that accelerates model forwarding (inference-time) speed via attention-level speculation.
Claims And Evidence: Most claims are convincing. Though I'm not convinced that such error is actually controllable (as shown in the main figure) if the model is very deep.
Methods And Evaluation Criteria: I think the benchmark results are sufficient. The evaluation metric is correct in terms of efficiency.
Theoretical Claims: No. I'm not a theory guy, and at the same time I don't think the theory introduced will ensure the error is under control with the proposed threshold acceptance method.
Experimental Designs Or Analyses: The experiments are appropriate. But I do believe speed results under different model sizes and settings (like memory-bound and compute-bound) are necessary. E.g., when the batch size is large, what trend will it show as the accelerators scale up?
Supplementary Material: No
Relation To Broader Scientific Literature: inference acceleration
Essential References Not Discussed: No
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: Can you show the error propagation results (or benchmark scores) with larger models with deeper layers? If so, I will be convinced that it's really useful.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
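The threshold acceptance test this review refers to can be sketched as follows. This is a hedged reading of the paper's description; the exact normalization the authors use is an assumption on our part.

```python
import numpy as np

def accept_speculation(approx_out, exact_out, lam):
    """Accept the approximate attention output when its L2 error
    relative to the exact output is at most lambda; otherwise the
    system falls back to the exact attention result."""
    err = np.linalg.norm(approx_out - exact_out)
    return bool(err <= lam * np.linalg.norm(exact_out))
```

The hit rates reported per lambda in the rebuttal tables are the fraction of layers/tokens for which a test of this kind passes.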
Rebuttal 1: Rebuttal: We are grateful for your detailed feedback, which will greatly improve the quality of the work.
## Error Propagation for Deeper Models
To make sure our method works for larger models with deeper layers, empirically, we have conducted experiments on the Llama 3.3 70B model for correctness analysis, which has 80 layers compared to 32 layers in Llama 3.1 8B. We showed that in terms of speculation hit rate and benchmark correctness across different lambdas, the Llama 70B model, despite being deeper and larger, manifests the same behavior as the 8B model. Specifically, we benchmarked the 70B models on GPQA with CoT, GSM8K with CoT, Multi-lingual GSM with CoT in Swahili, and Hotpot QA. The results are shown below:
| Config\Tasks | GPQA_COT | GSM8K_COT | MGSM_COT_Swahili | Hotpot QA |
|----------|----------|----------|----------|----------|
|Baseline| 0.518 | 0.958 | 0.852 | 0.940 |
|lambda=0.05| 0.529/55.1% | 0.950/64.5% | 0.856/73.9% | 0.940/32.3% |
|lambda=0.10| 0.507/80.0% | 0.951/86.6% | 0.852/90.1% | 0.945/59.0% |
|lambda=0.15| 0.458/89.9% | 0.946/93.9% | 0.840/95.6% | 0.935/74.6% |
|lambda=0.20| 0.446/95.1% | 0.936/96.8% | 0.820/98.1% | 0.935/83.3% |
|lambda=0.25 | 0.379/97.6% | 0.897/98.3% | 0.816/99.1% | 0.935/89.4% |
Overall, our new benchmark results empirically confirm that ALSpec works with larger models with deeper layers.
## Theoretical Claims
As shown in Appendix A, the worst-case error is on the order of $O(N^2\delta)$, where $N$ is the number of layers and $\delta$ is the threshold, which is certainly a limitation for deeper models. However, with stronger assumptions, the bound only has a dependence on $\sqrt{N}$, which is much more scalable. This is consistent with our new experiment results on the Llama 70B model, which achieved correctness as good as that of the 8B model despite being much deeper (80 layers) compared to the 8B model (32 layers).
## Impact of Batch Size
Speed results under different sizes and settings are indeed important.
Our work focuses on a specific case of decode, which is decoding at long context length with a small batch size (e.g. batch_size=1). Usually during long-context decode, the batch sizes are very small due to limited on-chip memory for the KV cache. Since this work already introduces the new idea of ALSpec and our space is already very limited for this 8-page manuscript, we therefore limited our scope to this scenario. Relevantly, this is the case for long-context decode on most accelerators such as H100s or Tenstorrent's N150, where the on-chip memory limits the use of large batch sizes at long context length.
Summary: LLMs are resource-intensive, and serving them is difficult. Model parallelism is bottlenecked by communication when the communication bandwidth is low, and data parallelism is great at throughput but not at inference latency. Approximate attention is robust at instruction-following and knowledge-retrieval tasks but not at context-heavy tasks and complex reasoning tasks. The paper uses approximate attention as a way to speed up self-attention in a speculative-decoding way, but at the per-layer level. In the setup introduced in the paper, it uses two NPUs per execution/shard, where one is the main device and the other is the speculative one. The main thread executes their in-house attention kernel that does StreamingLLM attention and the full attention at the same time; the StreamingLLM part finishes first and kick-starts the FFN computation on the speculative thread, which is on a different NPU. When the full attention finishes, it uses a threshold to check whether the StreamingLLM attention output is close to the full attention. If true, the main thread now becomes the speculative thread, and vice versa. If not, it will continue generating following the original path. The takeaway is that, compared with 8-card tensor parallelism, the upper bound in efficiency of the method is 4-card StreamingLLM tensor parallelism, and the lower bound in efficiency of the method is 4-card full-attention tensor parallelism. Because of the hit rate and lower communication overhead, the system is shown to beat tensor parallelism in efficiency while also preserving the serving efficiency.
Claims And Evidence: The important precondition of the proposed method is that doing tensor parallelism on multiple NPUs has huge communication overhead. The claim is justified on the left side of Figure 1. It would be better if the paper also discussed GPUs such as H100s that may have higher communication bandwidth and speed, to see whether it holds universally.
Most claims in the paper are backed strongly by an abundance of experiments.
Methods And Evaluation Criteria: The method relies on the approximate attention to speculate on a per-layer basis. Also, using two GPUs to overlap the FFN (during speculation) and full attention (main thread) makes intuitive sense. The evaluation is comprehensive for 8B-parameter models. It comprehensively covers important aspects of LLMs' performance.
Theoretical Claims: The threshold used for the branching decision is derived to make sure that the accumulated error at the end of the LLM is bounded. The proof in Appendix A contains no obvious error.
Experimental Designs Or Analyses: The advantages of the method:
1. Most of the paper's claims are strongly backed by comprehensive experiments, from approximate-attention weaknesses to end-to-end system speedup and performance. The paper is very well written.
2. The method proposed in the paper is novel in design, and the kernel designed is inspiring, as by perturbing the sequence of block computation in Flash Decoding, the kernel can compute both StreamingLLM attention and full attention at the same time without damaging the full-attention latency.
3. The per-layer analysis of StreamingLLM replacement is also insightful and can benefit the speculative decoding community.
The weaknesses of the method:
1. The core limitation is that the method's effectiveness is doubtful for frontier GPUs with the highest-speed communication bandwidth.
2. There is some computation waste during branch rejection that is innate to the design.
3. Another minor issue is that the method might not scale well when the context length goes beyond 128k, which makes full attention more expensive than the FFN, injecting more bubbles into the NPUs.
Supplementary Material: I have no comments on the supplementary material.
Relation To Broader Scientific Literature: The per-layer speculation and verification with partial attention is quite novel.
The per-layer analysis of StreamingLLM replacement (Figure 4) is also insightful and can benefit the speculative decoding community.
Essential References Not Discussed: There are some prior methods that use StreamingLLM to do output token-level speculative decoding; although they are very different from per-layer speculative decoding, they should be discussed and cited by the paper.
Sun, H., Chen, Z., Yang, X., Tian, Y., & Chen, B. (2024). Triforce: Lossless acceleration of long sequence generation with hierarchical speculative decoding. arXiv preprint arXiv:2404.11912.
Other Strengths And Weaknesses: I have no comments on other strengths and weaknesses.
Other Comments Or Suggestions: No other comments.
Questions For Authors:
1. Does the method work for frontier devices such as H100s?
2. Can the method scale up potentially to multi-node serving situations?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
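The per-layer speculative flow summarized in this review can be sketched as sequential pseudocode in Python. In the real system the speculative FFN and the full attention run concurrently on two NPUs; here everything is sequentialized, and all callables are placeholders of our own, not the paper's kernels.

```python
def speculative_layer(x, approx_attn, full_attn, ffn, accept):
    """One transformer layer with attention-level speculation:
    the cheap StreamingLLM-style attention finishes first and the FFN
    starts speculatively on its output; once the full attention
    completes, the speculative result is either kept (hit) or the FFN
    is redone on the exact attention output (miss)."""
    a_spec = approx_attn(x)       # cheap approximate attention
    y_spec = ffn(a_spec)          # speculative FFN (overlapped in practice)
    a_full = full_attn(x)         # exact attention
    if accept(a_spec, a_full):
        return y_spec, True       # speculation hit: keep the early result
    return ffn(a_full), False     # miss: recompute FFN from exact output
```

The miss branch is exactly the "computation waste during branch rejection" noted above: the speculative FFN work is discarded.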
Rebuttal 1: Rebuttal: We are grateful for your detailed feedback, which will greatly improve the quality of the work.
## GPU/TPU generalization
We ran new experiments for a correctness analysis on the Llama 3.3 70B Instruct model, which is more suitable for frontier GPUs like H100 with higher communication bandwidth and speed. We find that in terms of speculation hit rate and benchmark correctness across different lambdas, the Llama 70B model, despite being deeper and larger, manifests the same behavior as the 8B model. Specifically, we benchmarked the 70B models on GPQA with CoT, GSM8K with CoT, Multi-lingual GSM with CoT in Swahili, and Hotpot QA. The results are:
| Config\Tasks | GPQA_COT | GSM8K_COT | MGSM_COT_Swahili | Hotpot QA |
|---|---|---|---|---|
|Baseline| 0.518 | 0.958 | 0.852 | 0.940 |
|lambda=0.05| 0.529/55.1% | 0.950/64.5% | 0.856/73.9% | 0.940/32.3% |
|lambda=0.10| 0.507/80.0% | 0.951/86.6% | 0.852/90.1% | 0.945/59.0% |
|lambda=0.15| 0.458/89.9% | 0.946/93.9% | 0.840/95.6% | 0.935/74.6% |
|lambda=0.20| 0.446/95.1% | 0.936/96.8% | 0.820/98.1% | 0.935/83.3% |
|lambda=0.25 | 0.379/97.6% | 0.897/98.3% | 0.816/99.1% | 0.935/89.4% |
Additionally, we ran a performance analysis serving the 8B models on 4 vs. 8 H100s doing tensor parallelism using the SGLang serving framework and the FlashInfer attention backend. Although we don't (yet) have an ALSpec implementation, this provides an estimate of the performance gain if the method is implemented on H100s.
The results are summarized in the table below:
| Context Length | 4xH100 Attn Latency (us) | 4xH100 Non-Attn Latency (us) | 4xH100 TP Tok/s | 8xH100 TP Tok/s | TP Scaling | Projected ALSpec @ 65% Hit Rate | ALSpec Scaling |
|---|---|---|---|---|----|---|---|
|1k| 13 | 95 | 244.6 | 249.3 | 1.9% | 262.0 | 7.1% |
|32k| 29 | 100 | 214.2 | 231.3 | 8.0% | 246.0 | 14.8% |
|64k| 49 | 100 | 191.4 | 209.7 | 9.6% | 237.8 | 24.2% |
|96k| 56 | 100 | 178.2 | 194.2 | 9.0% | 224.9 | 26.2% |
|128k| 63 | 101 | 169.3 | 184.8 | 9.2% | 217.5 | 28.5% |
Our new estimation on 8xH100s shows that ALSpec would cut latency on Llama 8B by 1.28x compared to 1.09x for full TP at context length 128K. In this case, the attention latency is only 66% of non-attention latency, which means more gains can be achieved for context lengths beyond 128K.
For implementation on GPUs/TPUs, although ALSpec introduces a new attention kernel and the runtime modification SGDC, it does not fundamentally change the op-by-op and static-graph execution style on modern platforms. Therefore, we believe ALSpec is implementable on those platforms by expert kernel writers.
We also believe multi-node serving would be ideal for models with more parameters, such as Llama 70B, as running these models on a single node with 8-device TP would still give strong scaling. Only when doing TP across multiple nodes would they start to show diminishing returns, and we believe that this would be a perfect situation to apply ALSpec. Unfortunately, the resources required for these experiments are beyond our current experiment setup. However, with the correctness analysis and our implemented kernels and methods, we believe that our approach is scalable to larger models when computing resources are available.
## Scaling Context Length Beyond 128K
When full attention becomes more expensive than the FFN, there are bubbles in the NPUs. We thank the reviewer for pointing this out. This situation falls into Scenario 2 as depicted in Figure 5.
We are aware of this issue and we denote it as one future extension of the work. Since this work introduces the new idea of ALSpec and the content is already very full for an 8-page manuscript, we have limited our scope to Scenario 3 in Figure 5. Fortunately, most widely used modern models have a 128K context length and usually have attention running faster than the FFN.
## Additional literature
Thank you for the pointer to this valuable reference. We will add a citation and an in-text compare/contrast to our camera-ready version.
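A back-of-the-envelope expected-latency model is one way to reproduce projections like the "Projected ALSpec" column discussed in this rebuttal. The exact formula behind the table is not stated, so this sketch is an assumption on our part, with illustrative numbers only.

```python
def projected_decode_latency(attn_us, non_attn_us, hit_rate, approx_attn_us=0.0):
    """Expected per-step decode latency under attention-level speculation:
    on a hit, the full-attention cost is hidden behind the overlapped FFN
    (only a cheap approximate attention, defaulted here to ~0, stays on the
    critical path); on a miss, the full attention cost is paid."""
    hit_path = non_attn_us + approx_attn_us
    miss_path = non_attn_us + attn_us
    return hit_rate * hit_path + (1.0 - hit_rate) * miss_path
```

Under this model, gains grow with the ratio of attention to non-attention latency, which matches the observation that longer contexts benefit more.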
Summary: The paper introduces attention-level speculative parallelism (ALSpec), a dynamic method for approximating self-attention in large language models. ALSpec computes an approximate attention output and decides whether to accept the approximation. It uses a specialized flash decode kernel and SGDC with priority gating to overlap expensive attention and feed-forward operations. Experimental results on the Llama 3.1 8B model using Tenstorrent NPU devices show significant reductions in latency (up to 5× in attention overhead at long contexts) and improved throughput (about 1.65× speedup at high speculation hit rates) compared to traditional tensor parallelism approaches.
## Update after rebuttal
The author's response addressed my concerns, leading me to raise my rating from 3 to 4.
Claims And Evidence: The claims in this paper are supported by clear empirical observations and experiments.
Methods And Evaluation Criteria: Yes. The methods are well aligned with the goal of reducing inference latency in large language models while preserving output quality.
Theoretical Claims: I did not verify the correctness of the proofs, as the determination of appropriate speculation verification threshold values is primarily based on experimental results rather than theoretical derivations.
Experimental Designs Or Analyses: I am interested in understanding how the two primary optimization techniques implemented in this paper (namely the Speculative Flash Decode Kernel and SGDC) impact the inference latency and throughput.
Supplementary Material: I have reviewed a portion of the appendices, including Appendices C, D, E and F.
Relation To Broader Scientific Literature: The paper extends previous work on fixed attention approximations (Attention Sink) and token-level speculative decoding by introducing dynamic, attention-level speculative parallelism (ALSpec). It verifies approximations on the fly with hardware-efficient kernels and SGDC to achieve better performance.
Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. This paper introduces a dynamic speculative execution framework for self-attention, combining approximate and exact computations with on-the-fly verification, which improves latency without sacrificing accuracy. 2. Although the experiments were conducted on a specialized device (Tenstorrent N150), I believe the proposed methods could be applicable to other computational platforms as well. Weakness: 1. The experimental scope is relatively limited, as it does not encompass larger models, particularly those exceeding 10 billion parameters. 2. The experimental setup is not sufficiently comprehensive. There are no experiments demonstrating how much benefit the Kernel Fusion and SGDC-related optimizations provide in terms of latency and throughput. Additionally, there is no specific comparison showing the performance difference between unfused speculative attention and the fused kernel implementation. 3. The paper does not explain how the proposed method handles different batch sizes. Other Comments Or Suggestions: Clarify the baseline depicted in Figure 2. Questions For Authors: 1. In Figure 8, SP demonstrates scalability advantages compared to TP. Is TP's scalability disadvantage due to communication overhead? 2. Compared to TP, SP seems more similar to DP. In the experimental setup, shouldn't the comparison include DP+TP rather than just Full TP? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are grateful for your detailed feedback, which will greatly improve the quality of the work.
# Impact on Larger Models
We performed additional experiments in terms of correctness analysis on the Llama 3.3 70B Instruct model. We showed that in terms of speculation hit rate and benchmark correctness across different lambdas, the Llama 70B model, despite being deeper and larger, manifests the same behavior as the 8B model. Specifically, we benchmarked the 70B models on GPQA with CoT, GSM8K with CoT, Multi-lingual GSM with CoT in Swahili, and Hotpot QA. The results are shown below:
| Config\Tasks | GPQA_COT | GSM8K_COT | MGSM_COT_Swahili | Hotpot QA |
|---|---|---|---|---|
|Baseline| 0.518 | 0.958 | 0.852 | 0.940 |
|lambda=0.05| 0.529/55.1% | 0.950/64.5% | 0.856/73.9% | 0.940/32.3% |
|lambda=0.10| 0.507/80.0% | 0.951/86.6% | 0.852/90.1% | 0.945/59.0% |
|lambda=0.15| 0.458/89.9% | 0.946/93.9% | 0.840/95.6% | 0.935/74.6% |
|lambda=0.20| 0.446/95.1% | 0.936/96.8% | 0.820/98.1% | 0.935/83.3% |
|lambda=0.25 | 0.379/97.6% | 0.897/98.3% | 0.816/99.1% | 0.935/89.4% |
For models with around 50 billion parameters or more, such as Llama 70B, running the base model would usually require 8 devices with tensor parallelism (TP). As a result, using ALSpec would require at least 16 devices. To show ALSpec at the point where TP shows diminishing returns (so that ALSpec provides real benefits for the 70B model) would require even more devices. The resources required for these experiments are beyond our current experiment setup. However, with the correctness analysis and our implemented kernels and methods, we believe that our approach is scalable to larger models when computing resources are available.
# Additional Details in Experimental Setup for Kernel Fusion and SGDC
We thank the reviewer for pointing this out, and we will add more information in the camera-ready version, particularly in Appendices D and E, where we talk about the kernel fusion and SGDC in detail.
Regarding the reviewer's concern about the fused vs. unfused attention kernel, the speedup from the fused kernel is simply the difference between running flash decode on the entire context length vs. running flash decode on the entire context length plus the first and last chunk contexts. This is because the fused kernel changes the order of computation, allowing us to obtain the intermediate result of the first and last chunk for free.
# Impact of Batch Size
We focused on batch size 1 in this paper, as usually during long-context decode, the batch sizes are very small due to limited on-chip memory for the large KV cache. We will add more discussion on how batch sizes are handled in the camera-ready version. In short, we have experimented with a per-batch threshold, where we only accept the speculation if all batches pass the lambda test. Alternatively, we also experimented with treating all batches as a single tensor and performing a single lambda test. The trade-off between these two methods is not within the scope of this paper, as the existing content is already hard to fit within 8 pages. We plan to deep-dive into this area in future work.
# Why TP Fails to Scale
As the reviewer points out, communication overhead is one reason for TP's diminishing scalability. Another reason is that kernels such as GEMM have a constant cost regardless of the shape. When a model's parameters are sharded across many devices, the constant part of the kernel latency dominates, and as a result, the overall scalability diminishes.
# Comparisons to DP
Our proposed method is to optimize for latency. The problem with DP, as depicted in Figure 1, is that despite it giving good scalability (always close to 2x throughput with 2x more devices), it only improves throughput. If the objective is to optimize throughput only, then DP is always a better choice than ALSpec and full TP. On the other hand, the latency per user is always higher for DP than for ALSpec and full TP.
Therefore, we believe that a fair comparison should be done between ALSpec and full TP rather than DP. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I have read the author response to my review and updated my review.
Summary: The paper "Attention-Level Speculation" introduces ALSpec, a novel parallelism paradigm designed to accelerate transformer-based LLM inference by overlapping self-attention computations with subsequent non-attention operations (e.g., feed-forward layers). Key contributions include the core idea of speculating self-attention outputs using approximations (e.g., attention sink on first/last tokens) and verifying them in parallel. If the approximation error is within a threshold (controlled by hyperparameter $\lambda$), subsequent operations proceed early; otherwise, they fall back to exact attention results. Achieves 1.65× end-to-end decode latency reduction at 128K context length with 87.5% speculation hit rate, outperforming tensor/data parallelism scaling limits. Maintains baseline correctness on reasoning (GSM8K), math (MATH), and retrieval tasks by dynamically rejecting harmful approximations. Reduces attention latency overhead by 5× via overlapping computations, validated on Tenstorrent NPUs. A key insight is that static approximations (e.g., fixed sparse attention) fail on tasks requiring global context (e.g., topic shifts in multi-step reasoning). ALSpec adapts layer- and token-specifically, accepting approximations in 50–90% of layers without quality loss. Combines with tensor parallelism, showing continued scaling where pure tensor parallelism plateaus (e.g., 8 devices). Claims And Evidence: 1. The 5× attention latency reduction and 1.65× end-to-end decode latency improvement at 128K context length are validated through empirical benchmarks on Tenstorrent N150 chips (Tables 1–2, Figure 8). The scalability analysis demonstrates diminishing returns for pure tensor parallelism, while ALSpec + tensor parallelism continues scaling. 2. Dynamic verification (via L2-norm thresholds) maintains baseline accuracy on tasks like GSM8K, MATH, and MMLU PRO (Figure 2).
Static approximations (e.g., fixed sparse attention) fail on reasoning tasks, while ALSpec selectively accepts approximations with a hit rate of 50–90% per layer. Problematic Claims: 1. The reported latency gains are tied to Tenstorrent’s NPU architecture and proprietary kernels (e.g., speculative flash decode). Without ablation studies on GPUs/TPUs or open-sourced kernels, reproducibility is unclear. Host dispatch overheads (e.g., CPU-to-NPU communication) are excluded from latency measurements, potentially inflating real-world gains. 2. The Lipschitz continuity analysis (Appendix A) assumes independent, zero-mean speculation errors. Real-world error propagation may violate these assumptions, risking unbounded deviations in practice. The L2 verification threshold ($\lambda$) is empirically set without theoretical justification for its sufficiency across layers/tasks. 3. While ALSpec preserves accuracy on retrieval and math tasks, its performance on compositional reasoning (e.g., multi-hop QA) or low-resource languages is untested. The "needles in a haystack" experiment (Figure 3) uses synthetic data, which may not reflect real-world long-context retrieval challenges. Methods And Evaluation Criteria: The methods and evaluation criteria demonstrate technical validity but exhibit notable limitations in scope and generalizability: 1. The L2-norm threshold verification ($\|\tilde{A}_i - A_i\|_2 < \lambda \|A_i\|_2$) is empirically effective but lacks theoretical justification. While the Lipschitz continuity analysis provides error bounds (Equation 1), it assumes: 1.1 Independent, zero-mean speculation errors 1.2 Constant layer-wise Lipschitz factors ($\alpha, \beta$). These assumptions may not hold in practice, risking unbounded error propagation. 2. Integrates attention sink approximation (first/last $S$ tokens) with exact attention in a fused kernel, reducing overhead by 5×. However: 2.1 Chunk size $S$ is fixed (128–512) rather than adaptive to input.
2.2 Prioritizing first/last KV cache chunks biases toward positional extremes, potentially harming mid-context retrieval. 3. Maintains static execution graphs while dynamically routing computations. While effective on Tenstorrent NPUs, host-device dispatch overhead (CPU-NPU communication) is excluded from latency metrics, inflating real-world gains. 4. 4.1 Results are confined to Tenstorrent N150 NPUs. Tensor parallelism’s diminishing returns (Figure 8) may differ on GPU/TPU architectures due to distinct communication patterns. 4.2 Task Coverage: 4.2.1 No evaluation on low-resource languages (e.g., Swahili, Bengali) despite claimed multilingual support 4.2.2 Absence of compositional reasoning benchmarks (e.g., DROP, HotpotQA). Theoretical Claims: The authors' analysis provides directional guidance but lacks robustness guarantees for real-world deployments. Independent empirical validation of Lipschitz constants and error distributions is needed to trust the bounds. 1. Lipschitz Continuity Error Bound: The derivation in Appendix A establishes an upper bound on output deviation: $$ \epsilon \leq \sum_{i=1}^N (1 + \alpha)^{N-i+1} (1 + f(R)\beta)^{N-i} \delta_i $$ where $\delta_i = \|\tilde{A}_i - A_i\|_2$. I find a few issues with this: 1.1 This assumes identical $\alpha$ (feed-forward/LayerNorm Lipschitz) and $\beta$ (attention Lipschitz) across all layers. Real transformer layers exhibit heterogeneous operations (e.g., early vs. late layers), violating this assumption. 1.2 For $N=32$ layers, coefficients grow as $(1 + \alpha)^{32}$, making the bound practically vacuous unless $\alpha \ll 1$. The paper notes this but provides no empirical validation of $\alpha < 0.01$. 1.3 The high-probability bound assumes speculation errors are independent and zero-mean. Real approximation errors (e.g., attention sink) exhibit structured biases (e.g., positional bias toward first/last tokens), violating independence. 2.
The L2-norm threshold ($\lambda$) is empirically set without theoretical justification. The paper claims: $$ \|\tilde{A}_i - A_i\|_2 < \lambda \|A_i\|_2 \implies \text{safe approximation} $$ Issues: 2.1 Layer-Wise Thresholding: Ignores error accumulation across layers. A per-layer threshold $\lambda = 0.1$ could allow $\epsilon = O(N\lambda)$ deviation, violating final output fidelity. 2.2 $\|A_i\|_2$ varies significantly with context length and input entropy, making a fixed $\lambda$ suboptimal. 3. The 87.5% speculation hit rate (Table 1) suggests the bounds are overly pessimistic, but no ablation study isolates the impact of violating assumptions like uniform $\alpha/\beta$. The analysis doesn’t address the attention sink’s inherent positional bias – a systematic error source excluded from the zero-mean error assumption. Experimental Designs Or Analyses: The paper's experimental design demonstrates technical rigor but has critical limitations in scope and generalizability: 1. Task Coverage in Correctness Evaluation: 1.1 Benchmarks exclude multi-hop QA (DROP, HotpotQA) and low-resource languages. 1.2 "Needles in haystack" uses artificial key insertion (Fig 3), failing to capture real-world long-context QA patterns. 1.3 Compares only against attention sink, omitting dynamic variants of LSH (Reformer) or sliding windows (Longformer). 2. 2.1 Attention sink prioritizes first/last tokens, violating the paper's assumption of zero-mean independent errors (Appendix A). 2.2 Lipschitz constants ($\alpha, \beta$) are assumed uniform across layers, ignoring layer-specific dynamics (early vs. late layers). 2.3 The fixed L2 threshold ($\lambda=0.1$) lacks theoretical justification for error propagation across layers. 2.4 The exponential error bound $\epsilon \leq \sum_{i=1}^N (1+\alpha)^{N-i+1}\delta_i$ becomes vacuous for $N=32$ unless $\alpha \ll 1$ (unvalidated empirically). Supplementary Material: The authors did not provide any supplementary materials.
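To make the acceptance rule under discussion concrete, here is a minimal sketch of the per-layer L2 verification test (our own illustration, not the paper's kernel; the tensor shape, the use of the Frobenius norm, and the names `attn_approx`/`attn_exact` are assumptions standing in for the speculated and exact attention outputs):

```python
import numpy as np

def accept_speculation(attn_approx, attn_exact, lam=0.1):
    """Accept the speculated attention output iff
    ||A_tilde - A||_2 < lam * ||A||_2 (relative L2 error test)."""
    err = np.linalg.norm(attn_approx - attn_exact)
    return bool(err < lam * np.linalg.norm(attn_exact))

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 64))               # stand-in attention output
assert accept_speculation(a + 0.01 * a, a)     # ~1% relative error: accepted
assert not accept_speculation(a + 0.5 * a, a)  # ~50% relative error: rejected
```

Note that a per-layer test of this form says nothing by itself about accumulation of accepted errors across layers, which is exactly the concern raised above.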
Relation To Broader Scientific Literature: ALSpec bridges cognitive theories (attentional resource allocation), neural evidence (predictive coding), and ML systems (speculative execution) to address transformer inference bottlenecks. Its dynamic, hardware-aware approach advances beyond static approximations and token-level speculation, offering a generalizable framework for adaptive computation in LLMs. Essential References Not Discussed: The paper overlooks several critical areas of related work that contextualize its contributions: 1. The paper cites Elhoushi et al. (2024) for layer pruning but misses recent advances in self-speculative decoding (Elhoushi §4.2) and adaptive computation time (ACT) transformers. For example: 1.1 SPEED (Hooper et al., 2024): Overlaps layer computations across devices via pipelined speculation, achieving 1.8× speedups on GPUs without attention approximation. 1.2 LayerLoop (Eyuboglu et al., 2024): Reuses layer outputs for computational savings, relevant to ALSpec's focus on overlapping FF/attention. 2. While ALSpec uses attention sink, it omits comparison to: 2.1 Blockwise Parallel Transformers (BPT) (Google, 2023): Dynamically adjusts sparse attention blocks using gradient-based importance scores. 2.2 FLASH (H2O, 2023): Hybrid sparse-dense attention with runtime pattern selection, achieving 2.1× speedups on 32K contexts. 3. The Lipschitz analysis lacks connection to: 3.1 Kim et al. (2021): Proved that self-attention Lipschitz constants are unbounded without layer normalization, contradicting ALSpec's assumption of uniform $\alpha$. [1] 3.2 Zhu et al. (2024): Proves high-probability excess risk bounds of $O(1/n^2)$ via algorithmic stability under strong convexity/smoothness. [2] 3.3 Lei et al. (2023): Analyzes gradient stability for stochastic optimization but focuses on generalization bounds rather than error propagation rates. [3] 4.
ALSpec's Tenstorrent NPU focus ignores: 4.1 FlashDecoding++ (NVIDIA, 2023): Achieves 4.2× speedup over vanilla FlashAttention on 128K contexts via asynchronous state management. 4.2 vLLM (Berkeley, 2023): Paged attention for dynamic KV cache management, critical for real-world long-context deployments. 5. No discussion of subquadratic attention methods that reduce compute without approximation: 5.1 Hyena (Poli et al., 2023): Replaces attention with implicitly parameterized convolutions. 5.2 Mamba (Gu & Dao, 2023): Selective state-space models for linear-time sequence modeling. These omissions weaken ALSpec's claims of novelty in: - Dynamic execution (overshadowed by SPEED/LayerLoop) - Error analysis (lacks modern Lipschitz bounds) - Hardware generality (no GPU/TPU benchmarks vs FlashDecoding++/vLLM) Including these would better position ALSpec within the broader landscape of efficient transformer inference. [1] Kim, Hyunjik, George Papamakarios, and Andriy Mnih. "The Lipschitz constant of self-attention." International Conference on Machine Learning. PMLR, 2021. [2] Zhu, Bowei, Shaojie Li, and Yong Liu. "Stability and Sharper Risk Bounds with Convergence Rate $O(1/n^2)$." arXiv preprint arXiv:2410.09766 (2024). [3] Lei, Yunwen. "Stability and generalization of stochastic optimization with nonconvex and nonsmooth problems." The Thirty Sixth Annual Conference on Learning Theory. PMLR, 2023.
1.2 Integrates attention sink (Xiao et al., 2024) with Tenstorrent NPU optimizations (speculative flash decode kernel, SGDC), advancing beyond GPU-centric methods like FlashAttention. 1.3 Demonstrates that deeper layers tolerate more approximation (Fig 4), aligning with pruning literature (Elhoushi et al., 2024) but adding dynamic verification. 2. Significance 2.1 Achieves 1.65× speedup at 128K context (Fig 8), addressing the critical bottleneck of long-context LLM inference. 2.2 Combines with tensor parallelism, circumventing its diminishing returns (e.g., 8 devices yield 60 tokens/s vs. 40 tokens/s for pure tensor parallelism at 128K context). 2.3 Validated on real hardware (Tenstorrent N150) with mixed-precision support, showing feasibility for deployment. 3. Theoretical Gaps: 3.1 The error bound $\epsilon \leq \sum_{i=1}^N (1+\alpha)^{N-i+1} \delta_i$ assumes uniform $\alpha, \beta$ across layers, contradicting evidence of layer-wise dynamics (Kim et al., 2021). 3.2 The threshold $\lambda$ is empirically set to 0.1 without theoretical justification for sufficiency across tasks/layers. 4. Fails to compare with recent dynamic methods like SPEED (Hooper et al., 2024) or Hyena (Poli et al., 2023), which offer alternative efficiency gains. Provides a template for op-level speculation beyond attention (e.g., FFN layers), though this is not explored. Other Comments Or Suggestions: 1. Page 3, Fig 1: Define "ccl" (collective communication ops) in the caption. 2. Page 5, Algorithm 1: Clarify "ops before/after self attn" (e.g., LayerNorm, residual adds). 3. Appendix A: Add intermediate steps between Equations 2 and 3 for readability. Questions For Authors: 1. Can ALSpec reproduce the reported latency gains (1.65× at 128K context) on GPUs/TPUs, particularly compared to FlashDecoding++ (NVIDIA) or vLLM? If not, what architectural features of Tenstorrent NPUs (e.g., NOC design, fused kernels) are indispensable for ALSpec’s gains? 2.
How do you empirically validate the assumption of uniform Lipschitz constants (α, β) across layers? Does layer-wise measurement of α (e.g., via power iteration) reveal significant variance, and if so, how does this affect the error bound in Eq. 1? 3. Does ALSpec maintain correctness on low-resource languages (e.g., Swahili) or multi-hop QA (HotpotQA), where static approximations fail? If untested, could positional bias in attention sink harm non-English token distributions? 4. What is the CPU↔NPU communication overhead for priority tensor synchronization in SGDC? Does excluding this from latency metrics inflate real-world gains (e.g., 5× attention reduction)? 5. How does ALSpec compare to token-level pipelined speculation (Hooper et al., 2024) in terms of latency reduction per additional device? Does ALSpec outperform SPEED’s 1.8× GPU speedups when both use 8 devices? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are grateful for your detailed feedback which will greatly improve the quality of the work. ## Absence of compositional reasoning and low-res. languages We extend our eval to GSM8K in Swahili (low-res. language), HotpotQA, and RepoBench-P. Results are below. | Config\Tasks | MGSM_CoT_Sw | HotpotQA | RepoBench-P | |---|---|---|---| | no_spec | 0.58 | 0.92 | 0.756 | | l@0.05 | 0.59/54% | 0.92/19% | 0.74/33% | | l@0.10 | 0.58/78% | 0.92/43% | 0.77/60% | | l@0.15 | 0.59/86% | 0.92/57% | 0.81/76% | | l@0.20 | 0.56/95% | 0.91/72% | 0.76/85% | | l@0.25 | 0.52/97% | 0.91/80% | 0.74/92% | These new results show that ALSpec maintains correctness on low-res. languages (e.g., Swahili) or long context multi-hop QA (HotpotQA) with high speculation hit rate. ALSpec's attn sink applies positional bias to recent tokens to approx. full attn. The 128 window size should capture key contextual ideas regardless of language, while early tokens address the softmax off-by-one bias [Evan Miller, July 2023], not positional preference. ## GPU/TPU generalization We believe that ALSpec can reproduce latency improvements on GPUs/TPUs. We confirmed this with new experiments, serving 8B models on 4 vs. 8 H100s with TP using the SGLang framework and FlashInfer attn backend. Although we don't (yet) have an ALSpec implementation on GPU, this provides an estimation of the perf gain. The results are below: | Context Len | 4xH100 Attn Latency (us) | 4xH100 Non-Attn Latency (us) | 4xH100 TP Tok/s | 8xH100 TP Tok/s | TP Scaling | Proj. ALSpec @ 65% Hit Rate | ALSpec Scaling | |---|---|---|---|---|----|---|---| |32k| 29 | 100 | 214.2 | 231.3 | 8.0% | 246.0 | 14.8% | |64k| 49 | 100 | 191.4 | 209.7 | 9.6% | 237.8 | 24.2% | |128k| 63 | 101 | 169.3 | 184.8 | 9.2% | 217.5 | 28.5% | Our new estimation shows that ALSpec cuts latency on Llama 8B by 1.28x vs. 1.09x for full TP at context len 128K. 
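For intuition, a back-of-the-envelope overlap model can be sketched as follows. This is an illustrative model we add here, not the exact formula behind the projection in the table: it assumes the attention latency is fully hidden behind the non-attention ops on a speculation hit and paid sequentially on a miss, and it ignores communication and other constant costs.

```python
def overlap_speedup(attn_us, non_attn_us, hit_rate):
    """Expected per-layer decode-time speedup when speculation hides the
    attention latency on a hit and falls back sequentially on a miss."""
    baseline = attn_us + non_attn_us                    # sequential execution
    speculative = non_attn_us + (1.0 - hit_rate) * attn_us
    return baseline / speculative

# 4xH100 per-layer latencies at 128K context, taken from the table above.
print(round(overlap_speedup(63, 101, 0.65), 2))  # 1.33
```

This simplified model gives about 1.33x, in the same ballpark as (slightly above) the 1.28x projection above.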
In this case, the attn latency is only 63% of the non-attn latency---meaning more gains for context lengths beyond 128K. Although ALSpec introduces a new attn kernel and SGDC, it does not fundamentally change the op-by-op and static-graph execution style on modern platforms. Therefore, we believe ALSpec is implementable on GPUs/TPUs by expert kernel writers. ## Lipschitz continuity analysis assumptions Hua et al. (2023) could be a game changer for our induction method, and we would be grateful if LXts could point us to the exact paper title, as we weren't able to find it. The $\alpha$ and $\beta$ are not assumed to be universal bounds---they are upper bounds on the Lipschitz constants across all layers, used to simplify the expression. Regarding the mean-0 approximation error, we take the approximation to be $ \tilde{A} = QK^T(:,B) $, where $B$ is a subset of columns of $K^T$, and $ \tilde{H} = V(B, :)\text{softmax}(\tilde{A}). $ Under some conditions on the distributions of the columns of $V$ and the rows of $K^T$, we might be able to show $ \mathbb{E}_{\pi_V, \pi_A}\left[\tilde{H}\right] = H. $ Full independence is not required for an Azuma-style concentration result, but we agree that we have yet to show that the conditional expectations satisfy the martingale conditions. The boundedness of the error is a direct consequence of the algorithm; we abort the thread if the error exceeds the threshold. ## Host-device communication overhead Modern computing frameworks capture (trace) operations that are going to be executed on a device, ahead of time. This trace can then be executed on a device (e.g., GPU) without any interaction with the host. Since all ops in ALSpec run fully on device, we execute it using a trace. Hence, we analyze only the device duration to present findings that are agnostic to the host, while maintaining realism and feasibility. ## SGDC Sync Overhead The SGDC mechanism ensures that each device can compare its priority against its pair, thereby determining its role in SFD (i.e.,
sender/receiver). Thus, the sync is an all-gather on 2 devices, where each device collects the priority from its pair. GPUs/NPUs/TPUs support direct p2p communication via a topology (e.g., mesh), where collective ops (e.g., all-gather) can be implemented without host overhead. Considering (i) the small size of the priority tensor and (ii) that the cost of syncing is constant as the sequence length scales, we conclude that SGDC sync is a low-cost mechanism that does not inflate real-world gains from SFD. ## Token-level pipelined speculation A key advantage of ALSpec is that it leaves the model unchanged (no fine-tuning). SPEED requires retraining for weight sharing. ALSpec addresses the long-context-length issue through attention overlap. SPEED's 1.8x latency reduction is at small context lengths, while ALSpec's 1.65x latency reduction is at a context length of 128K. SPEED and ALSpec are optimized for different scenarios and could complement each other when used together. ## Additional literature Thank you for pointing us to these related works. We will cite, compare, and contrast ALSpec with them in our final version. --- Rebuttal Comment 1.1: Comment: I thank the authors for providing a detailed analysis and answering my questions. I wanted to follow up on the responses before considering raising my scores. # Absence of compositional reasoning and low-res. languages The results look commendable, and I thank the authors for providing a comprehensive table with the results. I am curious why the metric for *RepoBench-P* oscillates from 0.81/76% to 0.76/85% and then decreases further as lambda increases; this doesn't make sense for such a small change. I am also curious what the authors meant when they mentioned **The 128 window size should capture key contextual ideas regardless of language, while early tokens address the softmax off-by-one bias [Evan Miller, July 2023], not positional preference.** What is [Evan Miller, July 2023] referring to? I didn't find any reference/paper in this regard. Kindly clarify.
# Lipschitz continuity analysis assumptions The author's clarification that $\alpha$ and $\beta$ serve as upper bounds across all layers rather than universal constants makes sense. ```Under some condition of distribution of cols $V$ and $K^T$```, what are the conditions here? By my understanding, the attention sink pattern introduces a systematic positional bias (prioritizing first/last tokens), which likely violates the zero-mean error assumptions. While the authors ```abort the thread if the error exceeds the threshold```, the authors would benefit from analyzing how errors propagate before reaching the threshold, especially across consecutive speculation-accepting layers. --- Reply to Comment 1.1.1: Comment: # RepoBench-P Results Oscillation Thank you for your follow-up on the additional questions. We have rerun the experiments and confirmed that this is indeed the result. We don't believe that this is due to a bug in our code, as our evaluation uses the LM-Eval-Harness framework on the `longbench_repobench-p` task, which is widely used in the community. We suspect that moderate speculation (e.g., lambda=0.05 to 0.15) may be helpful for the model, as it removes non-important tokens in attention when the attention outputs do not diverge beyond the threshold. While this is only our intuition, we indeed see similar patterns in the main results, such as GPQA and SWDE, where applying ALSpec with a small lambda gives better correctness than the baseline before degradation happens as lambda increases. This interesting phenomenon is one of our future directions for follow-up work. # Softmax Off-by-One Bias The [Evan Miller, July 2023] reference is here: https://www.evanmiller.org/attention-is-off-by-one.html.
The first K tokens in ALSpec's attention are used to address the commonly observed transformer attention behaviour where the first few tokens attract high attention scores, while the last k tokens capture recent contextual ideas, serving as windowed attention. The StreamingLLM paper also illustrates this idea in Fig. 2. # Lipschitz continuity analysis assumptions Thank you for pointing out the condition required for our proof. We can reduce this problem to something that depends on the distributions of $K$ and $V$. Under the same setting, we have the approximation $\tilde{A} = QK^T(:,B)$ and $\tilde{H} = V(B, :)\text{softmax}(\tilde{A})$. Let $C$ be the set of column indices of $K^T$, let $B$ be the subset we take, and define $\kappa = \frac{\sum_{i\in C} e^{K_i}}{\sum_{i\in B} e^{K_i}} = \frac{\sum_{i\in C} S_i}{\sum_{i\in B} S_i}$. Then assume we can find a closed form of $\mathbb{E}[\kappa]$. We want to show $\mathbb{E}[\tilde{H}] = H$, where $H = \sum_{i \in C} S_i V_i$ and $\tilde{H} = \sum_{i \in C\backslash B} \tilde{S}_i V_i = \sum_{i \in C\backslash B} \kappa S_i V_i$. We hope to show $\mathbb{E}\tilde{H} - \mathbb{E} H = 0$, which is \begin{align*} \mathbb{E}\tilde{H} - \mathbb{E} H =& \sum_{i \in C\backslash B}\mathbb{E} [\kappa S_i V_i] - \sum_{i \in C} \mathbb{E}[S_i V_i] \\ =& \sum_{i \in C \backslash B} \mathbb{E} [\left(\kappa - 1 \right) S_iV_i ] - \sum_{i \in B} \mathbb{E} [ S_iV_i ] \end{align*} So we need to show \begin{align*} \sum_{i \in C \backslash B} \mathbb{E} [\left(\kappa - 1 \right) S_iV_i ] = \sum_{i \in B} \mathbb{E} [ S_iV_i ] \end{align*} We assumed this relation holds, and empirically both shallow (32-layer Llama 8B) and deep (80-layer Llama 70B) models demonstrated good results using the verification algorithm based on this formulation. Rigorously proving that this equation holds, however, requires us to run more experiments to determine the distributions of $\kappa$, the scores, and the values.
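As a minimal numerical rendering of the quantities above (an illustrative sketch we add for concreteness, not our kernel; note that for the illustration the approximate output sums over the retained index set $B$ itself, i.e., the standard subset-softmax renormalization, whereas the derivation above is written over $C\backslash B$):

```python
import numpy as np

def subset_attention(scores, V, B):
    """kappa-renormalized subset attention: keep key indices B and rescale
    their softmax weights by kappa = sum_C S_i / sum_B S_i so they sum to 1.
    scores: (n,) logits q.K_i over the full index set C; V: (n, d) values."""
    S = np.exp(scores - scores.max())
    S = S / S.sum()                       # softmax over the full set C
    kappa = S.sum() / S[B].sum()          # >= 1, and equals 1 when B = C
    return kappa * (S[B][:, None] * V[B]).sum(axis=0)

rng = np.random.default_rng(1)
scores, V = rng.standard_normal(16), rng.standard_normal((16, 4))
S = np.exp(scores - scores.max()); S = S / S.sum()
H = (S[:, None] * V).sum(axis=0)          # exact attention output
assert np.allclose(subset_attention(scores, V, np.arange(16)), H)  # B = C is exact
```

The sketch only shows the construction and its exactness for $B = C$; the open question remains under which score/value distributions the expectation over random $B$ matches $H$.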
Overall, in this work, we have shown (1) attention-level speculation as an example of op-level speculation, (2) a new execution paradigm, SGDC, for running op-level speculation within any graph, (3) some theoretical proofs and intuition for the algorithm, and (4) an end-to-end implementation on N150 hardware with customized kernels. Therefore, we believe that for this already content-heavy work, it is best to leave the additional experiments as future work. We believe that proving this assumption, or finding a weaker assumption that holds in ALSpec's case, will be an impactful work in itself rather than an extended section of this work. # Closing Remark We thank the reviewer for the detailed, insightful comments and suggestions of additional literature. They truly improved the quality of the work.
NestQuant: nested lattice quantization for matrix products and LLMs
Accept (poster)
Summary: This work presents a matrix multiplication replacement for low-precision quantized neural networks. A new vector quantization mapping is explored to maximize the efficiency of the low-bit number distribution. The authors propose to use a new encoding scheme to achieve near lower-bound compression. Claims And Evidence: The claims made in the submission are supported by extensive existing literature. Methods And Evaluation Criteria: The authors evaluated their results on LLaMA3-8B, across several zero-shot reasoning tasks, accuracy, and a perplexity metric on the Wiki2 dataset. The major experiments are provided in Table 1, and more ablation studies are provided in Section 5. There are many other experiments that need to be performed to demonstrate this method's effectiveness. 1. Results on LLaMA2, including 7B/13B/70B, on which most existing literature has reported. 2. Results on LLaMA3-70B, for which QuaRot and SpinQuant have also provided results. In addition to models, the authors need to verify the actual latency on GPUs with dedicated kernels; Section C only provides simulated speedup, which is not enough. Moreover, for weight-only quantization, 2-bit quantization on all models mentioned above is needed. Theoretical Claims: This work does not introduce new theories. Experimental Designs Or Analyses: See above. Supplementary Material: No Relation To Broader Scientific Literature: This work is related to the model compression literature. Essential References Not Discussed: None Other Strengths And Weaknesses: Weakness - SpinQuant is referred to as the uniform quantization baseline. However, SpinQuant does not use random Hadamard rotation; I believe the uniform quantization baseline is QuaRot. - Sec 4.3 overlaps with QuaRot, and the authors do not mention their differences. - I am curious to see, if the authors used the E8 lattice from QuIP# and aligned all other setups (e.g., random rotation), how much performance improvement we could get.
Other Comments Or Suggestions: The notation system could be improved with a table; the current notation involves many symbols yet lacks clear explanation. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the comment and appreciate the provided feedback. Following the suggestions, we ran the evaluations on a larger set of models and provide the perplexity results, as well as comparisons with other methods. The results are outlined in the table in the response to reviewer a2PJ. Our method demonstrates an advantage in both the weight-only quantization setup and the weight, KV cache, and activation quantization setup. In fact, models whose weights and KV cache were quantized with NestQuant are competitive even with weight-only quantization benchmarks. For speedup measurements, we introduce a relaxed version of the algorithm, called NestQuantM. In this algorithm, the E8 oracle in the dequantization procedure fixes the parity of the coordinate sum by adjusting the first coordinate, as opposed to picking the best coordinate using argmax/argmin. Such a simplification preserves the correctness of the encoding-decoding procedure (with a slight distortion of the shaping region), but significantly improves the efficiency of the code on GPU. We tested NestQuantM on several models and did not observe significant deterioration in quantization quality, as shown in the ppl results table. We have implemented a CUDA kernel for NestQuantM. When running the encode_e8 function, the kernel stores a scaled 8-dimensional vector as a collection of 8 int8 entries, packed into two 32-bit integers, and performs mass operations using regular integer operations and CUDA integer SIMD instructions. Please check the detailed description and benchmarking results in the response to reviewer r651. We provide the comparisons with QuIP# in the response to reviewer a2PJ. We note that NestQuant outperforms or matches QuIP# performance on all models. In addition, QuIP# uses an inter-layer finetuning algorithm, while NestQuant assumes no access to the data except for Hessian calibration data. The focus of this work is on quantization of W+KV or W+KV+A.
These setups are considerably different from W-only quantization (for which QuIP# was designed), because KV and A quantization is *done at runtime* and the encoding algorithm must therefore be fast. For W-only quantization, encoding is done offline and may be slow. Having said that, our experiments show that NestQuant attains SOTA ppl even for W-only quantization at R=4 bits. Going to the extreme low-rate regime, e.g., R=2 bits, requires further ideas from O&P24 not discussed in the present manuscript (MMSE scaling + sparsification + dithering). We will implement them, but it takes more time, and the experiments may not converge in the rebuttal time-frame. We also respond to the remaining comments: - We treat both QuaRot and SpinQuant (as well as other quantization schemes) as our benchmarks, and show improvement with respect to all previous quantization schemes. In Figure 3, we show results for synthetic data consisting of Gaussian iid matrices. Since the Gaussian iid distribution is isotropic, any rotation applied to it will not change its distribution, and SpinQuant and QuaRot are identical in this context, because (while they apply different rotations) they both use uniform quantizers. - We did mention in Section 2.2 that QuaRot already applied the Hadamard transform, but the reviewer is right and we should have also mentioned this in Sec 4.3. We apologize for the omission. In a revision, Sec. 4.3 will be sure to reference prior literature on rotations (SliceGPT, QuIP, QuaRot, and SpinQuant). - As mentioned above, QuIP# was designed as a W-only quantization method. Consequently, its dequantization algorithm was designed to run fast, but not its quantization algorithm. Our experiments show (small) improvement over QuIP# even in the W-only quantization regime, which is not the focus of our work.
Conducting experiments for W+KV/W+KV+A quantization with our Alg 1 + Alg 2 + overload-avoidance mechanism replaced with the QuIP# E8P quantization/dequantization from Sections 4.2 & 4.3 of the QuIP# paper will take a long time, because the QuIP# E8P quantization algorithm is not fast enough. Given the fact that NestQuant attains better ppl for W-only quantization, we expect the same trend also for W+KV and W+KV+A. --- Rebuttal Comment 1.1: Comment: I'd like to thank the authors for their rebuttal; I have checked the authors' reply and the review comments from other reviewers. I am also very familiar with the SOTA and understand the LLM quantization topic. First of all, I do not appreciate how this paper is drafted, mainly because of its chaotic notation system. Matrices and sets are both represented by capital letters; sometimes a set is represented with calligraphic letters. Moreover, many notations introduced earlier in the text are not used later. I understand that everyone has their own preferences for a manuscript and some may like this style (and I respect that). But personally speaking, I do not like how the problem and solution are presented in this paper. Additionally, I want to ask a few more questions regarding the rebuttal before I make the final decision. Q1. Throughout this paper, including Sec 2.1 and Figure 2, the authors talk about how to quantize weights and activations simultaneously using NestQuant. However, the experiments do not show W4A4 (or W3A3) results. Why spend so much space on W and W+KV experiments? Q2. Following Q1, I am confused about how NestQuant can quantize activations with practical benefits on hardware. It seems like activations require decoding before the matmul operations, which cannot remain in floating-point precision (correct me if I am wrong). Then, in this case, the major motivation is to find a fast-decoding and accurate algorithm. Q3. Regarding the CUDA kernel, the authors use GEMV, which is for decoding.
However, based on the QuaRot paper, " *As the prefill stage is known to be compute-bound [Ashkboos et al., 2023], joint quantization aims to reduce the precision of parameters and KV cache (which results in lower memory usage) as well as inputs (known as activations) and compute the forward pass in low precision* ", the purpose of weight activation quantization benefit should be on prefill stage. It seems NestQuant cannot do that. Q4. Following Q3, the authors mentioned, " *Our CUDA Kernel does GEMV (matrix-vector) multiplication, assuming the matrix is quantized with NestQuantM and the vector is unquantized* ", Does that mean quantizing the activation (vector) is not necessary since you need to decode it into floating-point in decoding? By the way, regarding the concerns on comparing QuIP#, thank you for your clarification. I see now that the encoding speed is the main difference. However, I think the motivation of this paper is misleading. The current motivation makes the reader think you can accelerate matmul rather than encoding the dynamic activation vectors. Finally, thanks again for providing results on other models, they are helpful to understand the performance of NestQuant. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer’s comment on our response. We believe that it would be worthwhile to re-emphasise the motivations and the practical implications of NestQuant. These motivations are outlined in the beginning of the Introduction section of the original paper. Our view is that the primary aim of quantization is to reduce the memory consumption enabling usage of cheaper GPUs for inference (less VRAM), improve storage efficiency of cold sessions (KV cache) and allowing bigger batches with QLORA (quantized forward pass activations) or pipelined parallelism in distributed inference. The secondary aim is accelerating inference by saving HBM load times in the generation phase. 
Since during the latter matrix-vector multiplication is the dominant operation, we implemented the GEMV kernel (see response to reviewer r651) to demonstrate that our dequant function has reasonable latency as well (i.e. 4.25-bit NestQuant is slower than 2-bit QuIP# but faster than 4-bit QuIP#). We want to stress that our work clearly shows that NestQuant yields better perplexity for a given budget of RAM, while inference speed will depend on the particular HBM speed and other hardware details. We do hope, though, that our demonstrations of strong perplexity gains will encourage work on hardware-assisted quant/dequant in future TPUs and GPUs. We also wanted to mention that, as you requested, we obtained the Llama-3-70B quantization result at W4A4KV4: ppl=3.70 (next best of all is OstQuant at 4.01).

Q1. The importance of the W-only and W+KV experiments is mainly in saving VRAM. However, we are also accelerating the inference: see the table we posted before; our kernel runs at int8 speed while taking int4 memory and achieving SOTA perplexity. We didn't present W4A4 results because W4A4KV4 already outperforms SOTA on W4A4 for most models. However, we have run experiments and show W4A4 results and preliminary results on W3A3.

| | Llama-3-8B | Llama-2-7B |
|-------|------------|------------|
| W4A4 | 6.56 | 5.64 |
| W3A3 | 8.25 | 6.33 |

Q2. Quantizing activations is useful for QLORA, for future guidance of hardware-assisted low-precision storage (energy efficiency), and for pipelined parallelism. We reiterate that our work demonstrates clear advantages (in perplexity) for using NQ for both Weights and W+KV+A quantization. Note that other vector quantization algorithms (AQLM, QuIP#, QTIP) have quantization functions that are too slow for runtime implementation, thus those algorithms are only used as W-only methods (though we are competitive even in this mode).

Q3.
Using GEMV, we demonstrate that loading a (smaller) quantized matrix and performing dequantization is faster than loading an unquantized matrix. This should carry the advantage of W-only quantization over to both prefill and generation, although we expect the relative speedup in generation to be larger. We also note that due to properties of E8, dequantization results in scaled int8 vectors, which allows us to use int8 multipliers for matmul. This can be seen by the reviewer in our GEMV code, which uses __dp4a() CUDA calls (!). BTW, there was a slight mistake in our CUDA kernel; the updated version runs slightly slower at 58us, code: https://pastebin.com/7G9EXgTc

Q4. The CUDA kernel is for the simplest version of NestQuant-quantized matrices (W or KV, for example) and plain int8 activations (e.g. queries). This would correspond to the W4KV4A8 case (which we did not test). For the reported results of W4KV4A4, the activations themselves are stored via NestQuant in 4 bits/entry. In this case the CUDA kernel requires another call to decode_nestquant() to unpack NQ-stored activations into int8 activations. We apologize for not being clear enough in the response to reviewer r651; "unquantized" here means that we didn't use NestQuant for quantization of the activation vector, but we still store it in 8 bits with uniform quantization. We use the same setup when assessing QuIP#'s speed.
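Since randomized Hadamard rotations come up repeatedly in this thread (QuaRot, SpinQuant, and NestQuant all use them), here is a minimal illustrative sketch of the fast Walsh-Hadamard transform in numpy. This is not the authors' kernel, just the textbook O(n log n) butterfly with the 1/sqrt(n) normalization that makes the transform orthogonal:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform, normalized to be orthogonal.

    Assumes len(x) is a power of two. Runs in O(n log n) versus the
    O(n^2) dense matrix-vector product with the Hadamard matrix.
    """
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x / np.sqrt(n)
```

Because the normalized Hadamard matrix is symmetric and orthogonal, applying `fwht` twice recovers the input, and vector norms (hence inner products, when the transform is applied to both factors) are preserved.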
Summary: The paper contributes a novel practical vector quantization method and applies it to quantize LLMs (post training). It is based on information theory with the key elements being: - Hadamard transforms to bring the distribution of vectors (weights/activations) closer to normality - Vector quantization of groups of 8 elements using the Gosset lattice. More precisely, a multi-scale Gosset lattice, where the set of scales is optimally estimated on a small calibration sample. The method has been shown to achieve better LLM performance when quantizing weights and keys / values to 4 bits, compared to current SOTA, and the analysis suggests its practicality, allowing also a speed-up besides memory savings. ## update after rebuttal I had several concerns about the clarity, the technical details, and how the reasoning is specific to preserving matrix products. The concerns have been resolved by the rebuttal; they can be addressed with a minor revision. From the experimental side, from my point of view, the paper presents enough evidence for improving SOTA quantization quality of large models and reasonable evidence of potentially low computation costs. It should certainly be accepted. Claims And Evidence: I have not noticed any over-claiming or inconsistency in the claims and the experiments. Methods And Evaluation Criteria: Yes Theoretical Claims: The paper devotes significant space to review prior work and background. It is not quite clear to me which parts of section 3 present original ideas. Mainly it is probably adaptation of results from information theory. I have some concerns about clarity, but they are also connected with the theoretical claims made. First, I find the paper lacks clarity in defining the following: 1) Lattice. It is specified rather late in 308 and indirectly. 2) nested lattice (139, 176) / nested lattice quantization. Should be important, given the title, right? Is a nested lattice a lattice? 3) Gosset lattice.
It is specified in 324, but I do not see how it is a subset of the standard integer lattice as claimed in 265-267. It is also not obvious that $D \cup D + \frac{1}{2}$ is a lattice. 110: I am confused by the claim that $UX$ is i.i.d. Gaussian, i.e. $UX \sim N(0,I)$, because this necessarily implies that $X\sim N(0, I)$ as well. Eq. (1): I believe the authors mean the hats on X and Y separately. **Q1** What part of the method makes it specific to approximating the product of matrices rather than optimizing the expected distortion of X and Y separately? It seems to me that all the arguments and constructions in Sections 3/4 (in particular, granular and overload errors) are motivated by the Normality of inputs and the expected distortion. The method for finding optimal scales also seems to be optimizing the expected distortion. Related to the above, how does Eq. (1-2) compare with a rate that one gets for $E(X' Y - \hat X' \hat Y)^2$ when quantizing $X$ and $Y$ optimally to minimize the distortions $E (X - \hat X)^2$, $E (Y - \hat Y)^2$? Doesn't it give $2*2^{-2 R}$, which is kind of very similar? I assume we are in the range $R > 1$... 316: It is not clear whether $x$ is assumed in $\Lambda$ or in $C$ at this point. The claim that $Q$ is a bijection between C and $\mathbb{Z}_q^d$ for any generator perhaps needs a justification or reference. **Q2** I do not see how the algorithm implements the shaping according to the Voronoi region of $q\Lambda$. The Encode procedure in Alg. 1 already assumes $x$ in the Voronoi cell of $q \Lambda$ but in Alg. 3 this is not ensured. The encoder using $v {\rm mod} q$ can of course encode any lattice point (then the input constraint of Alg. 1 needs to be lifted). The question is whether the decoder will decode it with zero error iff $x \in V_{q \Lambda}(0)$? I guess yes, but it would be nice to clarify in the paper.
Experimental Designs Or Analyses: I find the experiments to be sound, comparing the perplexity at the same bits per entry for all methods in Table 1 and in Fig 1 for several rates. Llama results for different rates in Tables 2,3 should provide a broader basis for potential applications and further methods. Validation in Fig 5 takes into account both the perplexity and the change in the code length due to different k. Additional experiments on a synthetic problem (Fig 3), and ablations in Tables 5,6 are also very useful. Perhaps it would be instructive to see what happens if orthogonal transforms are omitted. Supplementary Material: Yes, A-D, but did not follow all the details. Relation To Broader Scientific Literature: I am no expert, but what I see in the paper, it mentions many works in various contexts (e.g. locality-sensitive hashing). The paper advocates quantization for approximating inner products and cites recent theoretical results in 124. As discussed above, it is not entirely clear whether this is actually used towards the design of the scheme. I do not see the paper to be contributing substantially in methods, so I would not expect it to impact in the broader scope. Essential References Not Discussed: **Q3** I think I would like to see more clearly what constructs in the paper are claimed innovative / original. In particular, a more detailed discussion of what is different in the methods from Tseng et al. 2024. They seem to be using Gosset lattice E8 and Hadamard transforms as well. Why exactly their method is deemed impractical by the paper? Also, what is the attribution of the encoding-decoding algorithms 1,2 for the lattice restricted to a cell of $q \Lambda$? Other Strengths And Weaknesses: **Q4** In Fig. 4. Why there is no quantization of queries (top path after orange Hadamard). Is it intentional, like in this is in-chip and not a computation bottleneck? Then I wonder why quantizing the values (bottom path) is without a Hadamard before it? 
Doesn't it degrade the quantization accuracy? In fact, from this diagram, I don't know why the whole bottom path till the pooling with attention weights cannot be just identity, because I think a linear transform, which is the same for all tokens, can as well be applied after the pooling. The notation of the plot is confusing. I assume a group like $H W_k$ means $H$ applied to weights and then quantized (because of the Q superscript). Also, there are many different colors in the Figure. If they have a purpose, it would be appropriate to explain it in the caption. For instance, it would be nice if paired (same) Hadamard transforms were of the same color, but this seems not to be the case for the first on the left and those applied to $W_q$, $W_k$, $W_v$. **Q5** Can quantizing activations with a Hadamard transform actually achieve a memory saving, i.e. can the Hadamard transform be applied on the fly, without writing the activations to memory before quantizing them? If not, then there is a write and a read of them involved, and they need to be decoded before being used in the linear layer. It seems to me there is no advantage in memory use or speed compared to leaving activations as int8. Other Comments Or Suggestions: # Minor Issues 035R: It would be useful to elaborate on what the KV cache is, and why it is a bottleneck regarding memory. 085R: It is not clear what is random for the expectation. 173: How is finding the nearest lattice point an assumption? 220: I find it hard to follow. Choosing $S \in R^n$ and independently choosing $\beta$ to scale $S$ seems redundant. 230: What is the difference between ${\rm covol}(\Lambda)$ and ${\rm vol}(\mathcal{V}_\Lambda)$ in (3)? 236: $vol(B)$? 245: One confusion I have is that lattice quantization is still uniform over each of its basis directions. The claim was rather obscure. It would help to detail the example in Fig 2: show the generator; is this lattice a subset of the integer grid if scaled? I guess not?
I've noticed the Gosset lattice has symmetries with respect to the 4x4 Hadamard transform. Then perhaps the rotation can be simplified a bit, building from $I_4$ blocks? I am not sure this makes sense, but thought I would mention it anyway. Questions For Authors: I marked the most important questions as Q1-5 above. In summary, I am concerned that not everything is presented clearly enough, and it is possibly somewhat misleading. I am open to raising the score if these questions are clarified / rebutted. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for such a detailed reading and thoughtful comments! Please refer to the other responses for new experiments.
- Lattice defs clarity: We will provide references and a short definition of lattices and nested lattices early on.
- 110: The exact statement is only given later in Sec 3 (Random Rotation).
- Eq. 1: $\widehat{X^T Y}$ is our generic notation for the estimate of the inner product $X^TY$ recovered from the quantization bits.

### Q1
The fact that we can design quantizers for Gaussian matrices, and that those work universally for any two matrices $X$, $Y$, is a special feature of the MatMul problem. The reason is that we can apply a rotation $U$ on both $X$ and $Y$, and quantize $UX$ and $UY$ (instead of $X$ and $Y$), without affecting the matrix product. The rotation makes $UX$ and $UY$ essentially Gaussian, so we only need to design quantizers for this distribution. Not every good vector quantizer (VQ) will work well for MatMul. In VQ, only the trace of the quantization error covariance matrix matters. In MatMul quantization, its Frobenius norm is also important (O&P24). For "good" lattice quantizers in any dimension, this Frobenius norm is known to be small, which motivates the use of nested lattice quantizers. O&P24 show that for $R>0.906$ optimal nested *lattice* quantization of each column vector is indeed optimal for MatMul quantization. For smaller rates, further dimensionality reduction is needed. At this point we focused on just implementing the fast VQ; in the future, other ideas from O&P24 will be added. For example, at R=1.2 bit/entry we found that in E8 NestQuant, dropping 10% of vector entries (and at R=1 bit, dropping 25% of vector entries) works best. We also note that off-the-shelf lattice VQs are either highly suboptimal or too slow for our purposes, and our novel multi-scale trick solves this.

316: x is only assumed to be in $\Lambda$, and this will indeed be made clear. We will add a reference to C&S'83 for the bijection. See also our answer to Q2.
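The invariance at the heart of the Q1 answer (quantize $UX$ and $UY$ instead of $X$ and $Y$, since a shared rotation cannot change the product) is easy to check numerically. An illustrative numpy sketch, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, a, b = 64, 5, 7
X = rng.standard_normal((n, a))
Y = rng.standard_normal((n, b))

# A random orthogonal matrix, obtained via QR decomposition of a Gaussian matrix.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))

# (UX)^T (UY) = X^T U^T U Y = X^T Y: the product is exactly invariant,
# while UX and UY individually look like rotated (Gaussianized) inputs.
lhs = (U @ X).T @ (U @ Y)
rhs = X.T @ Y
```

So a quantizer designed for (near-)Gaussian inputs can be applied after the rotation without any change to the target product.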
### Q2
It's true that from $v$ mod $q$ the decoder can only know that the described point belongs to a particular coset of the "coarse" lattice $q\Lambda$. However, only one point (our decoder output!) in this coset belongs to the Voronoi cell of $q\Lambda$, thus resolving the ambiguity. This is described in C&S'83 and we will add a reference.

### Q3
As explained in Q2, Alg 1 describes a coset of the fine lattice, and Alg 2 chooses the unique member of the coset that belongs to the cell of $q\Lambda$. This simple enc/dec procedure is possible only due to the quotient group structure $\Lambda/q\Lambda$ of NestQuant. Shaping regions that are not cells of $q\Lambda$ do not lend themselves to these enc/dec algorithms. In our experiments we used $\Lambda=E_8$, as in QuIP#. Note, however, that NestQuant enables efficient encoding/decoding of the $2^{dR}$ points from E8, and allows the use of any R that is the log2 of an integer, which we see as a major advantage. QuIP# uses a different set of $2^{dR}$ points and encodes them via slow residual quantization and LUT. For 2,3,4-bit and W-only quantization (which is the goal in QuIP# and QTIP) this is totally fine, but for KV+A, which are quantized at runtime, those solutions are too slow. The NestQuant framework is not restricted to E8. It can be implemented with any base lattice $\Lambda$. In O&P24 it is shown that for "good" high-dim lattices NestQuant is information-theoretically optimal for MatMul quantization. Practical considerations prohibit high-dim lattices, and instead we use low-dim lattices. In such dimensions, overload errors are unavoidable and deteriorate performance. Our novel overload avoidance mechanism (multi-scale) is crucial for attaining SOTA ppl.

### Q4
We don't quantize queries for a fair comparison with other works (SpinQuant, QuaRot), which also keep queries in 16 bits when the KV cache is quantized. In autoregressive generation the queries are used once, while keys and values are used by all subsequent tokens.
We agree that applying a head-wise Hadamard transformation on values would make them easier to quantize, and we are thankful for this suggestion. In the final revision, we will clarify Figure 4. The Q-box around the $HW_K$ group indeed means that H is applied to the weight and this matrix product is quantized. The $W_oH$ part of the diagram should be corrected to $HW_o$. In this diagram, the H matrices on the key-query path are applied per-head (i.e. in small dimension), while all the other H are of full embedding dimension.

### Q5
We agree that quantizing activations below int8 should be used for the scenario when they need to be sent over the network for very large models under a pipelined setup. The answer to the question of whether the Hadamard transform can be done on the fly depends on the particular hardware and cache structure, but we note that SpinQuant and others are already using this idea in practice. We believe that one activation vector should fit in the L1 cache of most GPU SMs, making the fast Hadamard transform especially fast.

---

Rebuttal Comment 1.1: Comment: I thank the authors for a detailed response; I think all clarity concerns have been addressed. I have also checked the other reviews. I am not fully familiar with SOTA, but to me the experimental verification presented appears quite thorough and convincing. Also, I do not expect ICML papers to deliver optimized CUDA kernels; these aspects affect the practical impact of the work but not as much the research contribution. I have raised my score to strong accept.
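To make the Q2/Q3 coset argument concrete, here is a toy sketch with the simplest base lattice $\Lambda = \mathbb{Z}^d$ standing in for E8 (the function names are ours, not the paper's Algorithms 1-2): the encoder transmits the fine-lattice point mod $q$, and the decoder outputs the unique member of that coset inside the Voronoi cell of $q\Lambda$.

```python
import numpy as np

q = 14  # nesting ratio; the rate is log2(q) bits per dimension

def encode(x):
    """Quantize to the fine lattice (round to Z^d) and send only the coset mod q."""
    v = np.round(x).astype(int)
    return np.mod(v, q)

def decode(c):
    """Pick the unique coset member in the Voronoi cell of q*Z^d,
    i.e. the centered representative with entries in (-q/2, q/2]."""
    return c - q * np.round(c / q).astype(int)
```

Round-tripping is exact whenever the input lies in the Voronoi cell of $q\Lambda$; a point outside wraps around, which is precisely the overload error the multi-scale mechanism is designed to avoid.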
Summary: The authors propose NestQuant, a novel lattice-based quantization method designed to improve weight-activation quantization for LLMs. Claims And Evidence: The authors provide evidence, such as RMSE error comparisons with SpinQuant, to demonstrate the effectiveness of NestQuant relative to uniform-based methods. Methods And Evaluation Criteria: The evaluation benchmarks align well with those typically used for PTQ methods. Theoretical Claims: The theoretical section is clear. Experimental Designs Or Analyses: The primary experiments are conducted on Llama3-8B, with some extension to Llama3.2-1B. However, I am curious about the effectiveness of NestQuant on larger models, such as Llama3-70B, and other series like Qwen2.5-7B. Additionally, the current experiments do not include speedup measurements, which are crucial, especially considering the use of the new lattice-based quantization. A detailed discussion of these measurements is recommended. Supplementary Material: I have reviewed all sections of the supplementary material. Relation To Broader Scientific Literature: There is a lack of comparison with other rotation-based PTQ methods, particularly the following: 1. DuQuant: Distributing outliers via dual transformation makes stronger quantized LLMs, NeurIPS 2024. 2. OstQuant: Refining Large Language Model Quantization with Orthogonal and Scaling Transformations for Better Distribution Fitting, ICLR 2025. Given the publication dates, I believe a comparison with DuQuant is essential, while the inclusion of OstQuant would also be valuable. Essential References Not Discussed: The paper discusses weight-activation quantization and utilizes rotation transformation, but does not compare or discuss the following related rotation-based PTQ methods: - DuQuant (NeurIPS 2024) - OstQuant (ICLR 2025) Considering their relevance, I recommend including a comparison with DuQuant, and OstQuant can be an additional option for comparison.
Other Strengths And Weaknesses: The lattice-based quantization approach is a promising and interesting concept. Other Comments Or Suggestions: More emphasis could be placed on evaluating the method's performance with larger models and its impact on inference speed and memory usage. Questions For Authors: 1. Could you include a comparison with DuQuant and OstQuant in the paper? 2. Please consider adding more experiments on larger models, particularly Llama3-70B and Qwen2.5-7B. 3. Could you provide a detailed analysis of the inference speedup and memory usage, especially given the new lattice-based quantization method? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and providing valuable feedback! Here, we provide the answers to the questions. ### **Evaluation on other models** We have conducted additional experiments of NestQuant in compressing other LLMs. We chose models in Llama-2 series (7B, 13B, and 70B) and Llama3-70B, as most popular in the prior quantization literature (thus facilitating comparison). We considered three regimes: weight only, W+KV, and W+KV+A. We quantize the models to < 4 bits by choosing ($q=14$) for a fair comparison with other methods. Please refer to the table in response to reviewer a2PJ for Wikitext2 perplexity results with context length 2048. NestQuant consistently achieves better perplexity metrics across different models for both weight-only regime and full quantization. In fact, for all models we have tested, except Llama-2-7B, NestQuant with W4A4KV4 quantization even outperforms previous works with W4A4KV16. We hope that the new evaluation results would provide more evidence for robustness of the performance advantage of NestQuant. ### **Comparison with DuQuant and OstQuant** Indeed we should have included these important references, and we thank the reviewer for this comment. We now compare our numerical results to the results reported in those references under the same setup. In all setups we’ve tested, NestQuant improves over the ppl reported in those papers. On a deeper level, both DuQuant and OstQuant can be viewed as “pre-processing” techniques that create weights and activations that are “easier” to quantize, while the quantization method used there is essentially round-to-nearest (RTN). The main innovation in NestQuant is the ability to use quantizers that are more effective than RTN, at the expense of additional cycles in encoding/decoding complexity. These quantizers are based on “good” nested lattices and on the overload avoidance mechanism. 
Thus, the approaches in DuQuant/OstQuant and that of NestQuant actually complement each other. Combining the two approaches is an excellent direction for future research that is expected to further improve the SOTA.

### **Latency measurements**
We acknowledge that the significant improvement that NestQuant achieves in perplexity comes at the cost of slower quantization/dequantization speed. We provided FLOP counts in our appendix, which show that the computational burden of NestQuant is reasonable. Creating a fully-optimized CUDA kernel for NestQuant is indeed an important task for future work. As a temporary demonstration that NestQuant dequantization can indeed be fast on GPUs, we have implemented a CUDA kernel for a suboptimal version of NestQuant, which we refer to as NestQuantM. The difference between NestQuant and NestQuantM is that in the latter, the E8 optimal nearest neighbor decoder is replaced by a sub-optimal decoder. NestQuantM is close in ppl performance to NestQuant and still outperforms SOTA (see the ppl results table). Our CUDA kernel does GEMV (matrix-vector) multiplication, assuming the matrix is quantized with NestQuantM and the vector is unquantized. Below, we provide the runtime of the NestQuantM GEMV kernel, compared to other baselines. The measurements are made for an $8192 \times 8192$ matrix on an A100 GPU.

| Method | Time (us) |
| -------- | ---------- |
| Baseline (16 bits) | 97 |
| NestQuantM (4.25 bits) | 50 |
| QuIP# (2 bits) | 38 |
| QuIP# (4 bits) | ~75 |
| int4 uniform | 31 |

Note that we estimate the runtime of QuIP# (4 bits) as double the runtime of QuIP# (2 bits), since the size of the matrix that is loaded from DRAM doubles and the E8P dequantization procedure needs to be run twice due to RVQ. Also, while we use the 4.25-bit version with q=16 for benchmarking, the ppl for NestQuantM is still computed with q=14. This kernel makes NestQuantM a practical quantization scheme for weight and KV cache quantization.
We note that uniform quantization algorithms achieve much higher perplexity values. Other vector quantization methods, such as QuIP#, QTIP, or AQLM, are only usable for weight quantization, since they require a significantly more complex encoding (e.g. QTIP requires solving a dynamic programming problem to find the optimal trellis path). Additionally, we note that NestQuant can operate at any rate $R=\log_2(q)$ for integer $q$ with essentially no change in the code. We also wanted to stress again that the goal of this work was to improve the frontier of the tradeoff between the quality of the model and the amount of bits transferred into/from the compute cores. At this stage, we did not yet focus on improving actual generation speed. In terms of utility, however, we want to mention that transferring one byte from DRAM (HBM) requires 3000x the energy of an int8 multiplication (Horowitz, doi: 10.1109/ISSCC.2014.6757323). Thus, even accounting for the extra algorithmic overhead, we believe that NestQuant already gives improved energy consumption for a given perplexity compared to other schemes.
Summary: The presented work proposes an improved quantization technique for LLMs that wastes 17% less quantization "space". Written well and very balanced (theory + practical verification). ## update after rebuttal Initially my own (internal) score was between 3 and 4 and I rounded up to 4, because I found the presentation compelling enough and the results good. The very healthy discussion here reaffirmed me that the submission should be accepted (for me it is now a solid 4), and I hope the authors add all the additions discussed to the final version. Thank you to the authors for also actively supporting the rebuttal! Claims And Evidence: The claim of improvements by using an improved technique is proven by experimental results (up to 20% relative). Methods And Evaluation Criteria: All standard evaluation criteria have been applied, also perplexity, which is known to be most sensitive. Theoretical Claims: Validated to be correct. I certainly appreciate the appendices much! Experimental Designs Or Analyses: Good. Supplementary Material: I didn't review supplementary material in detail as it contains mainly (Python) code. Relation To Broader Scientific Literature: As the technique has been inspired by other work (Voronoi regions), I expect influence on a broader audience than just ML. Essential References Not Discussed: Everything good. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. Indeed we believe that this work may be interesting also to researchers outside the ML community. In particular, NestQuant consists of a combination of various ideas, including random rotations, nested lattice quantization for the induced Gaussian-like vectors, the overload avoidance mechanism with multiple beta values together with Algorithm 6 designed for choosing those values. These ideas span information theory, communications, and even pure math (the lattices used for MatMul quantization should have properties that were not previously studied - in particular, their covariance matrix should have a small Frobenius norm). We certainly hope this work will initiate a broader interest in PTQ for LLMs within the mentioned research communities. In addition to the evaluations provided in the paper, we also perform additional evaluations of NestQuant on models from Llama 2 series (7B, 13B, 70B), and Llama-3-70B (though we didn't have enough time to finish simulation of W+KV+A quantization for this model). The table below shows the Wikitext2 (seq. length 2048) perplexities of NestQuant and other PTQ methods, with NestQuant outperforming state of the art for most models and setups. 
| Bits (W-A-KV) | Method | Llama-2-7B | Llama-2-13B | Llama-2-70B | Llama-3-8B | Llama-3-70B |
|---------------|----------------|------------|-------------|-------------|-----------|-------------|
| 16-16-16 | Floating point | 5.47 | 4.88 | 3.32 | 6.14 | 2.86 |
| 4-16-16 | QuaRot | 5.60 | 5.00 | 3.41 | - | - |
| | QuIP# | 5.56 | 4.95 | **3.38** | - | - |
| | OstQuant | 5.64 | 4.94 | 3.41 | 6.53 | 3.19 |
| | NestQuant | **5.53** | **4.93** | **3.38** | **6.31** | **3.14** |
| | NestQuantM | 5.55 | 4.95 | - | 6.35 | - |
| 4-16-4 | NestQuant | 5.57 | 4.96 | 3.39 | 6.37 | 3.19 |
| | NestQuantM | 5.59 | 4.99 | - | 6.49 | - |
| 4-4-16 | SpinQuant | 5.9 | 5.2 | 3.8 | 7.1 | - |
| | OstQuant | 5.60 | 5.14 | 3.57 | 7.24 | 3.97 |
| | DuQuant | 6.08 | 5.33 | 3.76 | - | - |
| 4-4-4 | QuaRot | 6.10 | 5.40 | 3.79 | 8.16 | 6.66 |
| | SpinQuant | 5.9 | 5.3 | 3.8 | 7.3 | - |
| | OstQuant | 5.91 | 5.25 | 3.59 | 7.29 | 4.01 |
| | NestQuant | **5.67** | **5.03** | **3.49** | **6.63** | TBD |
| | NestQuantM | 5.73 | 5.07 | - | 6.82 | - |

In this table, there is a method "NestQuantM", which corresponds to NestQuant with a simpler, but more computationally efficient, E8 lattice oracle used in the decoding. We evaluated NestQuantM for Llama-2-7B, Llama-2-13B, and Llama-3-8B. We have attached a figure: https://imgur.com/a/bNW1D7K for reference.
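The multi-scale overload-avoidance mechanism referenced in these rebuttals (multiple beta values, with Algorithm 6 choosing the set) can be caricatured with a toy base lattice $\Lambda = \mathbb{Z}^d$ in place of E8; the greedy selection rule and the numeric values below are our illustrative simplification, not the paper's algorithm:

```python
import numpy as np

q = 14
betas = [0.05, 0.1, 0.2, 0.4]  # candidate scales, finest first (illustrative values)

def overloads(x, beta):
    """True if x/beta rounds to a point outside the Voronoi cell of q*Z^d,
    i.e. the quantized vector would wrap around when sent mod q."""
    return bool(np.any(np.abs(np.round(x / beta)) > q / 2))

def pick_scale(x):
    """Pick the finest scale that avoids overload (smaller beta = lower distortion);
    fall back to the coarsest scale if every candidate overloads."""
    for beta in betas:
        if not overloads(x, beta):
            return beta
    return betas[-1]
```

The trade-off is the one discussed in the responses: a finer beta gives lower granular distortion, but a vector with large entries forces a coarser beta to stay inside the Voronoi cell of $q\Lambda$.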
Understanding the Emergence of Multimodal Representation Alignment
Accept (poster)
Summary: This paper aims to understand the properties under which alignment emerges in multi-modal models. Specifically, they studied the influence of the data similarity (heterogeneity, i.e., how similar the two modalities are) and uniqueness/redundancy of information (a.k.a. information imbalance) between the modalities on alignment (Figs. 2, 4-6, 10, 14-17). Alignment is measured via Huh et al's [1] KNN-based centered kernel alignment variant (also see Appendix B). Further, they studied how alignment correlates with performance (Figs. 7-9, 11-13, 18-22, Tab. 1). To answer these questions, they designed a synthetic dataset (Fig. 3) to control for uniqueness and heterogeneity. They corroborate the synthetic results with experiments using the Wikipedia caption dataset [5] and MultiBench [2]. ## update after rebuttal Please see my rebuttal comment below. --- ## References [1] Huh, Minyoung, et al. "The platonic representation hypothesis." arXiv preprint arXiv:2405.07987 (2024). [2] Liang, Paul Pu, et al. "Multibench: Multiscale benchmarks for multimodal representation learning." Advances in Neural Information Processing Systems, Datasets and Benchmarks Track (2021). [3] Liang, Victor Weixin, et al. "Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning." Advances in Neural Information Processing Systems 35 (2022): 17612-17625. [4] Schrodi, Simon, et al. "Two effects, one trigger: on the modality gap, object bias, and information imbalance in contrastive vision-language representation learning." arXiv preprint arXiv:2404.07983 (2024). [5] Srinivasan, Krishna, et al. "Wit: Wikipedia-based image text dataset for multimodal multilingual machine learning." Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2021. Claims And Evidence: * Maximum achievable alignment is controlled by uniqueness of the input modalities. This is well-supported by the experimental evidence in Figs.
2, 4-6, 14-17 on synthetic as well as real data. The results for heterogeneity (Figs. 6, 10) are less clear. * Performance is not directly correlated with alignment (Sec. 5). This is again supported by Figs. 7, 8. Again, the results for heterogeneity seem less clear (Fig. 9). Methods And Evaluation Criteria: * The KNN-based variant of center kernel alignment based on Huh et al [1] is well-suited to evaluate alignment. However, other alignment measures would be appreciated since properly measuring alignment is challenging. * The synthetic and real datasets, as well as the chosen models, are well-suited. Theoretical Claims: N/A Experimental Designs Or Analyses: * The synthetic dataset is well-designed and motivated to cleanly study the effect of uniqueness and heterogeneity (Fig. 3). * Correlation is measured by the Pearson correlation coefficient. However, rank-based correlation coefficients would be a better fit, like Spearman or Kendall's $\tau$, since they don't assume a relationship a priori (beyond that the order should matter). Supplementary Material: I've skimmed over Appendix A, closely read Appendix B and C, and checked additional result figures in Appendix D for consistency with the results in the main paper. Relation To Broader Scientific Literature: Huh et al [1] put forward the platonic representation hypothesis. This paper investigates the effect of key data properties (information balance and data heterogeneity) on alignment. I'd like to note that work by Schrodi et al [4] also investigated information imbalance in the context of the modality gap and object bias for CLIP models (see below for more details). Findings and experiments seem related, though the scope is different. Thus, I conclude that this work is a valuable contribution to understanding how data shapes the models, in this case their representational alignment. Essential References Not Discussed: * Schrodi et al [4] showed that information imbalance causes the modality gap and object bias.
Information (im)balance (called information redundancy/uniqueness in this work) is also the data property studied in this work (besides heterogeneity). Particularly, they hypothesized that less shared information worsens alignment, which leads to the modality gap and object bias. Further, some findings and experiments share a resemblance. Thus, it'd be good to discuss the similarities and differences to Schrodi et al. in future versions. Other Strengths And Weaknesses: * S: the paper is well-written and clear. Other Comments Or Suggestions: * It'd be good to make more explicit how alignment is measured between visual-only models like DINOv2 and the LLMs, as done by Huh et al [1]. Questions For Authors: * In the synthetic data, is information always redundant or unique for each sample or can this vary per-sample? E.g., for one sample a factor is part of the data while for another sample it is not. * Why does the first encoder in the synthetic setting only have a single layer? * How are the models trained in the synthetic setting? I.e., what type of training method is used? CLIP loss? Captioning loss? * How are correlations computed? Since there are many points per x-value across all plots, this should lower correlations, right? * What is the effect of the number of task-relevant features? Currently, it is only set to 8. What happens when you set it to 256? Code Of Conduct: Affirmed. Overall Recommendation: 4
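As background for the KNN-based alignment metric this review refers to, here is a rough sketch of a mutual k-nearest-neighbor alignment score in the spirit of Huh et al. [1]; it is an illustration only, not the paper's exact CKA variant (see Appendix B of the paper), and the function name is ours:

```python
import numpy as np

def mutual_knn_alignment(feats_a, feats_b, k=10):
    """Average overlap of k-nearest-neighbor sets computed in two
    representation spaces over the same samples. Returns a score in [0, 1];
    1 means both spaces induce identical neighborhoods."""
    def knn_indices(x):
        # l2-normalize, rank by cosine similarity, exclude self
        x = x / np.linalg.norm(x, axis=1, keepdims=True)
        sim = x @ x.T
        np.fill_diagonal(sim, -np.inf)
        return np.argsort(-sim, axis=1)[:, :k]

    nn_a = knn_indices(np.asarray(feats_a, dtype=float))
    nn_b = knn_indices(np.asarray(feats_b, dtype=float))
    overlaps = [len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]
    return float(np.mean(overlaps))
```

Identical feature matrices score 1.0, while unrelated representations score near k/(n-1) in expectation.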
Rebuttal 1: Rebuttal: We thank the reviewer and are glad that they find our experiments well-designed and motivated. Below we address the reviewer's comments and questions. **Under "Methods And Evaluation Criteria:"** > The KNN-based variant … other alignment measures would be appreciated since properly measuring alignment is challenging. We report additional results with unbiased CKA with a RBF kernel, Mutual KNN [1], SVCCA [2] with three different sample sizes, and all metrics support our paper's main claims. See response to reviewer B2rQ for more details. [1] Huh et al. "The platonic representation hypothesis." (2024). [2] Raghu et al. "SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability" (2017). **Under "Experimental Designs Or Analyses:"** > Correlation is measured by the Pearson correlation coefficient. However, rank-based correlation coefficients would be a better fit, like Spearman or Kendall's $\tau$, since they don't assume a relationship a priori (beyond that the order should matter). We are happy to explore other correlation metrics. [Here](https://tinyurl.com/h3fv2ez3) are the results using Spearman correlation. We find that the Spearman correlation supports our main claims: it shows a strong negative trend between maximum alignment and uniqueness and that the relation between alignment and performance is weaker or negative when there is greater uniqueness. Also, we note that the Pearson correlation might actually be a suitable choice given common observations that linear relations in latent representations tend to emerge after training (e.g., see https://arxiv.org/abs/2007.00810). **Under "Essential References Not Discussed:"** > Schrodi et al [4] showed that information imbalance causes the modality gap and object bias. …It'd be good to discuss the similarities and differences to Schrodi et al. in future versions. Thank you for bringing up this related work.
We agree that [4] is related to our work in that it analyzes the effect of information imbalance on the representations learned through contrastive learning, whereas our work focuses on emerging alignment through increased model capacity. We will add a discussion of [4] in our updated paper. **Under "Other Comments Or Suggestions:"** > It'd be good to make more explicit how alignment is measured between visual-only models like DINOv2 and the LLMs, as done by Huh et al [1]. The details of the alignment computation are in Appendix B as follows: "Following Huh et al. (2024), we use $k=10$ nearest neighbors over 1024 samples from the Wikipedia caption dataset. For the vision model, the class token of each layer is used, and for the language model, the embeddings of a given layer are average pooled to a single token. $l_2$ normalization is applied to the features and elements in the features that are above the 95-th percentile are truncated." We're happy to add any additional details that the reviewer thinks are missing. **Under "Questions For Authors:"** > In the synthetic data, is information always redundant or unique for each sample or can this vary per-sample? E.g., for one sample a factor is part of the data while for another sample it is not. The proportion of redundant to unique information is constant for all samples. > Why does the first encoder in the synthetic setting only have a single layer? We provide additional experiment results demonstrating that our results are unchanged when $E_1$ has a higher depth. [Here](https://tinyurl.com/4uwe4zkm), we change the depth of $E_1$ to 2 and 3 and find that the results are not significantly changed. We hypothesize that because $E_1$ is trained on the untransformed modality, $E_1$ will remain relatively easy to optimize even as the depth increases. We will include these results in our updated paper. > How are the models trained in the synthetic setting? I.e., what type of training method is used? CLIP loss?
Captioning loss? In the synthetic setting, the ground-truth labels are available, so the models are trained in a supervised manner with cross-entropy loss. > How are correlations computed? Since there are many points per x-value across all plots, this should lower correlations, right? In Fig. 4 and 5, the correlation is computed using only the maximum alignment rather than all points. We will clarify this in our updated paper. > What is the effect of the number of task-relevant features? Currently, it is only set to 8. What happens when you set it to 256? The total number of task-relevant features would not impact our results -- what matters is the proportion of redundant to unique features. If we had 256 task-relevant features, out of which 128 are shared, we would expect to see that the result is similar to $U=4$ in our setting. Our results on real-world data are on much higher dimensions -- for our experiments on Wikipedia Image Text and on MM-IMDb, we use LLMs with high dimensional latent spaces of 1024 or greater. --- Rebuttal Comment 1.1: Comment: I thank the authors for their replies to the other reviews and mine. I tend to uphold my score of 4 despite the critiques brought up by other reviewers. In particular, I think the chosen alignment metric is a justified choice and the added evaluations using other metrics provide sufficient evidence to support the authors' claims. That said, I do have a follow-up question regarding "is information always redundant or unique" since the response has not addressed the core of my question: What happens if certain information is redundant for some samples but unique for others? For example, color might be redundant in some cases but unique in others. How would this variability of whether information is redundant or unique affect the findings?
--- Reply to Comment 1.1.1: Comment: > That said, I do have a follow-up question regarding “is information always redundant or unique” since the response has not addressed the core of my question: What happens if certain information is redundant for some samples but unique for others? For example, color might be redundant in some cases but unique in others. How would this variability of whether information is redundant or unique affect the findings? We thank the reviewer for this insightful and intellectually stimulating question. To recapitulate, the reviewer inquires whether the notions of redundancy and uniqueness, as used in our work, should be regarded as *aggregate* quantities—computed over the joint distribution of variables—or whether a *pointwise* (i.e., sample-specific) formulation might be more appropriate or feasible. In our study, we adopt the definitions of redundancy, uniqueness, and synergy as formalized in the Partial Information Decomposition (PID) framework [1]. These definitions are intrinsically grounded in mutual information, which is, by construction, an expectation over the joint distribution of the relevant random variables. That is, mutual information quantifies average statistical dependence and does not inherently attribute information values to individual data points or observations. Consequently, the redundancy, uniqueness, and synergy measures derived from mutual information are likewise aggregate in nature: they describe global statistical properties of the system rather than localized or instance-specific contributions. We concur with the reviewer that it is conceptually plausible—and potentially of practical significance—to consider localized (e.g., pointwise or groupwise) versions of information-theoretic measures. 
In particular, a pointwise PID could yield valuable insights in contexts such as instance-level model interpretability, attribution analysis, or context-sensitive decision-making, where global averages may fail to capture the heterogeneity of information contributions across samples. However, the development of a sound theoretical framework for such a decomposition remains an open research problem, requiring new mathematical tools and likely new conceptual foundations. To the best of our knowledge, a fully general pointwise formulation of the PID components—particularly one that adheres to the axiomatic foundations of the framework—has not been rigorously established in the literature. Accordingly, while we recognize and appreciate the importance of this perspective, we believe that a rigorous treatment of pointwise or groupwise PID components falls outside the scope of the present work. We thus leave this as an important and compelling direction for future research. [1] Williams et al. “Nonnegative decomposition of multivariate information” (2010).
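To make the aggregate-versus-pointwise distinction above concrete: mutual information is by definition the expectation of the pointwise mutual information over the joint distribution,

```latex
I(X;Y) \;=\; \mathbb{E}_{(x,y)\sim p(x,y)}\big[\operatorname{pmi}(x;y)\big]
\;=\; \sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)},
\qquad
\operatorname{pmi}(x;y) \;=\; \log\frac{p(x,y)}{p(x)\,p(y)}.
```

So while a pointwise value exists for each realization (and can be negative), the PID components of Williams et al. are defined from the averaged quantity $I(X;Y)$ and therefore inherit its aggregate nature.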
Summary: This paper presents an empirical investigation of alignment between models with possibly different architectures and trained over different modalities. The authors investigate under which conditions the so-called Platonic Representation Hypothesis is likely to arise based on the heterogeneity of the data modalities and uniqueness in information. The empirical findings over multiple simulated datasets and a multimodal benchmark reveal that alignment is not necessarily correlated with an increase in model performance, hence establishing that alignment between models can arise only under specific experimental conditions. Claims And Evidence: The main claim is that the two axes the authors have proposed to measure, namely uniqueness of information and heterogeneity, are responsible for more or less alignment in trained models. This is an interesting proposal that relates to other studies in catastrophic forgetting in continual learning, see e.g. [1]. The authors provide evidence of this relation in several synthetic datasets, generated according to these two axes of variation, and on real-world data and baselines. The main investigation is to uncover if alignment correlates with model performance and scale. Overall, the evidence supports the claims that this depends on uniqueness and the modality gap. It remains open whether and how models trained on larger datasets (consisting of multiple degrees of uniqueness) can express a higher degree of alignment. This is a more challenging scenario to test, worth spelling out in the conclusions. Methods And Evaluation Criteria: The methods and evaluation are clear for the synthetic experiments. I struggled a bit to understand how the authors investigate the real-world datasets and models. I require further clarifications from the authors that can help in reading and assessing the quality of their evaluation, see questions. Overall, I'm leaning positively towards the analysis the authors conducted.
Theoretical Claims: N/A Experimental Designs Or Analyses: I focused more on synthetic experiments to understand the core message there. Why do the authors choose a non-linear transformation only for the second modality? Would it have been sensible to have it also for the first modality? I suggest including random baselines when alignment is measured and plotted (RQ1). Supplementary Material: N/A Relation To Broader Scientific Literature: Understanding the datasets' shared information or uniqueness is something relevant also in Continual Learning [1]. There, this information can lead to more or less catastrophic forgetting. Also, the multimodal setup where representations are compared resembles theoretical works on identifiability for the case of independent component analysis [2]. This connection can be helpful for new theory-oriented works. [1] Toward Understanding Catastrophic Forgetting in Continual Learning, Nguyen et al. (2019) [2] The Incomplete Rosetta Stone Problem: Identifiability Results for Multi-View Nonlinear ICA, Gresele et al. (2019) Essential References Not Discussed: N/A Other Strengths And Weaknesses: Figures are helpful to understand the message. Other Comments Or Suggestions: One minor note: I do not entirely understand how the bottom part with triangles and circles should be interpreted. There is a repetition at the beginning of section 3, from line 118 onwards. The same sentence appears in line 65. Questions For Authors: About real-world experiments: 1) How do you measure uniqueness for MOSEI, MOSI, URFUNNY, etc? Not entirely clear how this is evaluated from human annotation. 2) How is it the case that perturbations of the input correspond to changing uniqueness? This aspect is not clear and feels a bit toy. There is the risk that perturbed strings and images can create out-of-distribution inputs for both vision and language models. Is it sensible to expect any alignment at all where so much information is distorted?
3) Lines 291-300 are not clear about the upper limit for alignment. How is this tested or referenced? Code Of Conduct: Affirmed. Overall Recommendation: 4
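To ground the synthetic setup this review discusses, here is a loose sketch of how such a controlled two-modality dataset could be generated; the dimensions, the tanh map standing in for the nonlinear transformation, and the thresholded linear label rule are all hypothetical stand-ins, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_shared, n_unique = 1000, 4, 4  # hypothetical sizes

# Task-relevant latent factors: shared (redundant) across modalities
# vs. present only in the first modality (unique)
z_shared = rng.normal(size=(n, n_shared))
z_unique = rng.normal(size=(n, n_unique))

# Modality 1: untransformed factors, linearly related to the label
x1 = np.concatenate([z_shared, z_unique], axis=1)

# Modality 2: only the shared factors, pushed through a fixed random
# nonlinearity to introduce heterogeneity
w = rng.normal(size=(n_shared, 16))
x2 = np.tanh(z_shared @ w)

# Label depends on all task-relevant factors, so the information in
# z_unique is unique to modality 1
beta = rng.normal(size=(n_shared + n_unique,))
y = (x1 @ beta > 0).astype(int)
```

Varying the n_shared/n_unique split controls uniqueness, while the choice of nonlinearity for the second modality controls heterogeneity.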
Rebuttal 1: Rebuttal: We thank the reviewer and are glad that they find our experimental evidence convincing. Below we address the reviewer's comments and questions. **Under "Claims and Evidence:"** > It remains open whether and how models trained on larger datasets (consisting of multiple degrees of uniqueness) can express a higher degree of alignment. We present new results on MM-IMDb [1], a dataset for classifying movie genres with 25k paired images and texts. [Our results](https://tinyurl.com/5my72nnb) demonstrate that the relation between alignment and performance varies depending on the classification task (see response to B2rQ for more details), suggesting that the degree of alignment depends significantly on the downstream task. [1] Arevalo et al. "Gated multimodal units for information fusion" (2017). **Under "Experimental Designs Or Analyses:"** > I focused more on synthetic experiments to understand the core message there. … Would it have been sensible to have it also for the first modality? We acknowledge that there are many ways of defining heterogeneity; however, a benefit of leaving the first modality untransformed is that the representation $E_1$ learns is an ideal one, since it has a direct linear relationship with the labels; aligning with this ideal representation could imply that the model has learned something truly universal, a requirement for the Platonic hypothesis. Then if $E_2$'s representation is highly aligned with $E_1$'s, we can infer that $E_2$ has learned to recover information that is comparable to the untransformed modality. Nevertheless, transforming both modalities may yield insightful results, and we leave the exploration of different types of heterogeneity to future work. > I suggest including random baselines when alignment is measured and plotted (RQ1). We have run experiments computing alignment between randomly initialized neural networks [here](https://tinyurl.com/5fwdecad).
Results confirm that the alignment of these neural networks is constant with respect to uniqueness and that there is no correlation between alignment and performance on average. **Under "Relation To Broader Scientific Literature:"** > Understanding the datasets' shared information or uniqueness is something relevant also in Continual Learning [1]. Thank you for bringing up these related works. We will include a discussion in our updated paper. **Under "Other Comments Or Suggestions:"** > One minor note: I do not entirely understand how the bottom part with triangles and circles should be interpreted. In Figure 1, the triangles and circles represent data from different modalities. In Figure 2, the triangles (and other shapes) also represent data from different modalities with varying degrees of heterogeneity. We will clarify this in the final version of our paper. > There is a repetition at the beginning of section 3, from line 118 onwards. The same sentence appears in line 65. Thank you for pointing this out. We will remove the redundancy. **Under "Questions For Authors:"** About real-world experiments: > How do you measure uniqueness for MOSEI, MOSI, URFUNNY, etc? Not entirely clear how this is evaluated from human annotation. While we do not rely on exact estimates of uniqueness for the MultiBench datasets, past work [1] has sampled several data points and asked human annotators to rate the redundancy and uniqueness of each example. These ratings are shown to agree with computational estimates of redundancy and uniqueness for various MultiBench datasets. [1] Liang et al. "Quantifying & Modeling Multimodal Interactions: An Information Decomposition Framework" (2024) > How is it the case that perturbations of the input correspond to changing uniqueness? … Is it sensible to expect any alignment at all where so much information is distorted?
We agree that our method of perturbing the Wikipedia caption dataset is not fully aligned with our definition of uniqueness. Hence, we provide new experiment results on MM-IMDb and improved our experiments on the Wikipedia-Image Text dataset. We use GPT-4 to synthesize text captions with unique information that is not present in the images, ensuring that the resulting datasets retain the semantics of real-world text and images. Our findings support our paper’s key claims (see response to obVJ for more details). > Lines 291-300 are not clear about the upper limit for alignment. How is this tested or referenced? While the theoretical upper bound for alignment, based on the HSIC metric, is 1, our empirical results (Figures 4 and 5, indicated by the red dot) show that the observed upper limit is significantly lower. We therefore conjecture that the maximum achievable alignment is constrained by the amount of shared information between the two modalities. We acknowledge that this argument is not rigorously formalized, and we will clarify this point in the updated version of the paper. --- Rebuttal Comment 1.1: Comment: Thank you for the extensive reply. > Our results demonstrate that the relation between alignment and performance varies depending on the classification task (see response to B2rQ for more details), suggesting that the degree of alignment depends significantly on the downstream task. Can you elaborate on this? So, whether it is classification or image captioning or something else? This does not answer the question of what happens if you have bigger and bigger datasets, because this is the standard case for training VLMs. > Nevertheless, transforming both modalities may yield insightful results, and we leave the exploration of different types of heterogeneity to future work. Yes, it would be useful to include that. > We have run experiments computing alignment between randomly initialized neural networks here. 
Results confirm that the alignment of these neural networks is constant with respect to uniqueness and that there is no correlation between alignment and performance on average. Thank you. > We agree that our method of perturbing the Wikipedia caption dataset is not fully aligned with our definition of uniqueness. Hence, we provide new experiment results on MM-IMDb and improved our experiments on the Wikipedia-Image Text dataset. This looks cool, thank you. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the suggestion of exploring different definitions of heterogeneity. We will include a discussion of how our analysis framework can be extended to different types of heterogeneity as future work. > Can you elaborate on this? So, whether it is classification or image captioning or something else? This does not answer the question of what happens if you have bigger and bigger datasets, because this is the standard case for training VLMs. By different downstream tasks, we meant that MM-IMDb has 23 categories of movies, and thus the multilabel classification task can be broken down into 23 binary classification tasks (e.g. classifying genre 1 vs. all other genres). We wanted to present a new use case of our analysis that would be relevant to larger datasets for which there are typically many downstream tasks (which can extend to generative tasks, such as image captioning as the reviewer pointed out). To clarify our answer to the original question of how alignment changes when there are potentially many degrees of uniqueness, we demonstrate that the alignment-performance correlation depends on the amount of unique information **that is task relevant**. In the case of MM-IMDb, even though the text modality can contain many degrees of uniqueness compared to the image (as the text summarizes the plot of the movie), not all of the additional information that the text provides about the plot would be useful to the given classification task. 
Therefore, our analysis would reveal for each task whether the degrees of uniqueness are task-relevant. Smaller linear fit slopes to alignment-performance scores suggest that aligning modalities is less helpful for certain tasks, in which case practitioners should focus on modeling unique information.
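The alignment-performance diagnostics used throughout this thread (Pearson and Spearman correlations plus the slope of a linear fit) reduce to a few lines; this helper is illustrative only, not the authors' code, and its rank transform ignores ties, which is adequate for continuous scores:

```python
import numpy as np

def _rank(v):
    # rank transform without tie handling (fine for continuous scores)
    return np.argsort(np.argsort(v)).astype(float)

def alignment_performance_summary(alignment, performance):
    """Pearson r, Spearman rho, and the least-squares slope of
    performance regressed on alignment."""
    a = np.asarray(alignment, dtype=float)
    p = np.asarray(performance, dtype=float)
    pearson = np.corrcoef(a, p)[0, 1]
    spearman = np.corrcoef(_rank(a), _rank(p))[0, 1]
    slope = np.polyfit(a, p, deg=1)[0]
    return pearson, spearman, slope
```

A slope near zero, even alongside a moderate correlation, would signal that pushing alignment higher buys little performance for that task.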
Summary: This paper mainly focuses on analyzing the emergence of multimodal representation alignment. Alignment between cross-modal representations has long been regarded as an important factor in improving multimodal model performance. Some recent research has found that independently trained unimodal models can be implicitly aligned. The authors aim to identify the conditions and reasons for alignment emergence, and whether such alignment is an indicator of performance. Through comprehensive synthetic and real-world dataset experiments, the authors reach several conclusions. 1. The alignment may not be universally beneficial. 2. Such alignment impacts performance differently across datasets and tasks. Claims And Evidence: The authors mainly discuss the emergence of implicit alignment in multimodal training. The paper makes the following claims: 1. Under low uniqueness, alignment is significantly correlated with performance and model capacity. However, when uniqueness increases, this relationship becomes much weaker. 2. Alignment alone is not a sufficient predictor of model performance, especially in multimodal settings with uniqueness and heterogeneity. Although I have several concerns about the experimental designs and their support for the final conclusions, I generally agree with the claims within the paper. My major question concerns the heuristic of this paper on modern multimodal model design. Although the authors provide detailed experiments and analysis, the conclusions seem to be obvious and intuitive. The observation that alignment is less related to performance when uniqueness and heterogeneity increase is not particularly novel. In contrast, I am more curious about how such conclusions can impact the design of modern models on various datasets or downstream tasks, which however is less discussed throughout the paper. Methods And Evaluation Criteria: This paper is mainly analytical without giving a method.
The evaluation metrics of the paper, for example CKA and uniqueness, are reasonable. However, in Fig. 4 and 5, the notation $r$ is never introduced before. Theoretical Claims: This paper is mainly analytical without giving theoretical claims. Experimental Designs Or Analyses: I appreciate the designs of the synthetic experiments. The uniqueness assessment and label generation are reasonable. However, the experimental setup for the real benchmark seems to be unaligned with the problem settings. Specifically, on the Wikipedia caption dataset, the uniqueness of text and image data is implemented by random deletion and Gaussian perturbation, which is actually injected noise. Such a design seems to run counter to the definition of uniqueness in lines 155-162, which states that "Uniqueness in modality quantifies the amount of information present in the first modality absent in the second but critical for the downstream task", since noise cannot be crucial. Thus the conclusions from the real-world experiments may not be convincing. I am also concerned about the setting of asymmetric encoders in lines 190-195, where $E_1$ is simply a single-layer encoder while $E_2$ is a deep encoder of varying depth. While I've noticed the second modality is simulated by a nonlinear transformation, such a design can lead to the issue that $E_1$ can easily learn a good representation while the optimization of $E_2$ can be much harder. Supplementary Material: The appendix of this paper mainly gives details of datasets and supplementary experiments. Relation To Broader Scientific Literature: Please refer to the earlier parts of the review; the impact of the alignment conclusions on the design of modern models for various datasets or downstream tasks is less discussed in this paper. Essential References Not Discussed: No other related works need to be mentioned. Other Strengths And Weaknesses: While the paper is written in a straightforward manner, there are several weaknesses in the writing that should be improved.
For instance, the introduction of CKA on the second page is overly long. Since this is a contribution of previous works, the details of this part are recommended to be moved to the Appendix. The explanation of $x_r$ and $x_u$ in lines 207-209 duplicates that in lines 171-176. The bijection $\phi$ in line 212 has also been introduced before. Other Comments Or Suggestions: No other suggestions. Questions For Authors: Please refer to the former parts. My major concerns include the novelty of the conclusions on implicit alignment, the impact of such conclusions on modern multimodal model designs, and the experimental designs. Code Of Conduct: Affirmed. Overall Recommendation: 2
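As background for the CKA metric this review comments on, here is a minimal sketch of (biased) linear CKA; the paper itself uses an unbiased, KNN-based variant, so this is for orientation only:

```python
import numpy as np

def linear_cka(x, y):
    """Biased linear CKA between feature matrices of shape (n_samples, dim).
    Invariant to orthogonal transforms and isotropic scaling; 1 means
    identical representational geometry."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    # With linear kernels, HSIC reduces to squared Frobenius norms
    # of (cross-)covariance matrices
    hsic_xy = np.linalg.norm(x.T @ y, "fro") ** 2
    hsic_xx = np.linalg.norm(x.T @ x, "fro") ** 2
    hsic_yy = np.linalg.norm(y.T @ y, "fro") ** 2
    return hsic_xy / np.sqrt(hsic_xx * hsic_yy)
```

The invariances are what make CKA attractive for comparing encoders whose feature dimensions differ.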
Rebuttal 1: Rebuttal: We thank the reviewer for the review. Below we address their questions and concerns. **Under "Claims and Evidence":** > My major question concerns the heuristic of this paper on modern multimodal model design … which however is less discussed throughout the paper. To the best of our knowledge, our analysis of the emergence of alignment across the dimensions of uniqueness and heterogeneity is novel and fills an important gap in the literature on cross-modal alignment. While prior work—such as the Platonic Representation Hypothesis [1]—suggests that alignment tends to emerge with increasing data scale and serves as an indicator of good performance, these claims have not been rigorously examined across key characteristics of multimodal data. In this paper, we critically evaluate these assumptions and argue that while alignment may indeed correlate with performance in settings where modalities share high redundancy (i.e., low uniqueness), this relationship breaks down when the modalities are more distinct. In such scenarios, increased alignment does not necessarily translate to better downstream performance. We believe this insight is not only novel but also practically useful, as it encourages practitioners to reconsider alignment strategies in cases where they may be counterproductive. We additionally explore the application of alignment-performance correlation for quantifying the information content of downstream tasks. Specifically, we present results on MM-IMDb [1], a dataset for classifying movie genres with image and text modalities. [Our results](https://tinyurl.com/5my72nnb) demonstrate that the relation between alignment and performance varies depending on the classification task (see response to B2rQ for more details), which can inform practitioners when aligning modalities is beneficial. [1] Arevalo et al. "Gated multimodal units for information fusion" (2017).
**Under Methods And Evaluation Criteria:** > This paper is mainly analytical without giving a method. … However, in Fig. 4 and 5, the notation $r$ is never introduced before. Thank you for the feedback. We will update our paper to define Alignment as unbiased CKA and Unique as the number of unique features used in computing the label. **Under Experimental Designs Or Analyses:** > I appreciate the designs of the synthetic experiments. The uniqueness assessment and label generation are reasonable. However, the experimental setup for the real benchmark seems to be unaligned with the problem settings. … the conclusions from the real-world experiments may not be convincing. We agree that our method of perturbing the Wikipedia caption dataset is not fully aligned with our definition of uniqueness. To ensure that the perturbed dataset retains the semantics of real-world text and images, we provide new experiment results that leverage GPT-4 to synthesize text captions with unique information that is not present in the images. We keep the original image data without any additional noise. We upload our perturbed text data and code for generating the perturbations [here](https://tinyurl.com/8vxt9hby). For each (image, text) pair in the original dataset, we prompt GPT-4 to produce 10 captions with increasing levels of uniqueness: 10%, 20%, … 100%, such that the final caption contains only information that is unique to the text. As uniqueness is already introduced in the text, we keep the original images in the Wikipedia caption dataset. Using a pretrained sentence BERT model to quantify semantic similarity between the original caption and the GPT-4 captions, we find that the average semantic similarity monotonically decreases as the level of uniqueness increases. We compute the alignment between various types of vision models and LLMs. Our updated results support both claims: 1) The maximum alignment decreases with increased uniqueness.
[see figure here](https://tinyurl.com/yeezwdxu) and 2) The slope of the fitted line to the alignment and performance scores decreases with increased uniqueness, showing that the relation between alignment and performance weakens. [see figure here](https://tinyurl.com/4ynj2td3). > I am also concerned about the setting of unsymmetrical encoders in line 190-195 … while the optimization of $E_2$ can be much harder. We provide additional experimental results demonstrating that our results are unchanged when $E_1$ has a higher depth. [Here](https://tinyurl.com/4uwe4zkm), we change the depth of $E_1$ to 2 and 3 and find that the results are not significantly changed. We hypothesize that because $E_1$ is trained on the untransformed modality, $E_1$ will remain relatively easy to optimize even as the depth increases. We will include these results in our updated paper. **Under Other Strengths And Weaknesses:** > While the paper is written straight-forward, here are several weaknesses in the writing that should be improved. Thank you for the feedback. We will revise our paper accordingly. --- Rebuttal Comment 1.1: Comment: Thank you for your explanation and detailed experiments. My concerns about unsymmetrical encoder depth have been resolved. The misalignment between settings and experiments has also been addressed through additional experiments. Both experiments are expected to be added during further revision. On the other hand, my concern about the novelty and practicability of the alignment analysis in this paper remains underexplored. I am mostly convinced about the explored relationship between alignment and uniqueness and heterogeneity, as stated in both the main paper and rebuttal. However, practitioners are more concerned about the impact of such a relation on practical usage. For instance, when facing a real-world large-scale scenario, when and how should we measure such a relation, and how should we adjust the training procedure according to it?
These questions are less discussed in the paper and rebuttal. Reviewer B2rQ seems to share similar concerns: "No new method is proposed, which limits the contribution of this paper." In conclusion, I will raise my score to 2 for the detailed experiments, and will reconsider my score if the authors provide further explanation or if other reviewers make additional comments. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's response and the opportunity to further clarify our work. Below, we address their concern regarding the practicality of our analysis by providing additional experimental results. While we recognize the importance of practical implications, we would like to respectfully emphasize that the primary contribution of our study lies in the systematic refutation of the PRH—an aspect that, to the best of our knowledge, has not been previously established. Although our conclusion may align with intuitive expectations, we believe that this does not diminish the novelty of formally demonstrating that the PRH does not universally hold. > Thank you for your explanation … Both experiments are expected to be added during further revision. We will add the experiments to our updated paper. > On the other side, my concerns about the novelty and practicability of the alignment analysis in this paper are still underexplored. ... For instance, when facing a real-world large-scale scenario, when and how should we measure such relation, how should we adjust the training procedure according to the relation. These questions are less discussed in the paper and rebuttal. Reviewer B2rQ seems to share similar concerns that "No new method is proposed, which limits the contribution of this paper." We present the following use case for our analysis. Consider a practical setting where there is a large dataset of paired input data, but only a small subset of the dataset has labels for downstream tasks, due to the cost of annotation.
An important problem is: how can a practitioner utilize the supervision from the data subset while still ensuring good generalization by leveraging the unlabeled paired data? One approach is to finetune a pretrained model using both a supervised loss and an explicit alignment objective, such as the CLIP loss. However, an important question comes up: how should the contribution of the supervised and alignment losses be balanced to maximize performance? The loss takes the form $\mathcal{L} = \mathcal{L}_{\text{sup}} + w \cdot \mathcal{L}_{\text{CLIP}}$. From our analysis, we know that the “ideal” amount of alignment is dataset and task-specific. Specifically, alignment-performance correlations have a direct algorithmic implication: if the alignment-performance correlation is small, then performance degrades or does not change when increasing the weight on the explicit alignment objective. Conversely, when the alignment-performance correlation is larger, performance should increase with a larger weight on the alignment objective. To test this idea, we run experiments on the MM-IMDb dataset on 10 different binary classification tasks, where we sample 1024 labeled examples for each of the train, validation and test sets to simulate the data-scarce scenario (in comparison to the original dataset size of 25k examples). The alignment-performance correlations can be easily computed with pretrained vision and language models using the sampled data. We start with vision and language encoders pretrained with CLIP and finetune the models with $\mathcal L$, where the weight on the alignment objective varies in $w \in \{0, 0.1, 0.25, 0.5, 1.0, 2.0, 5.0, 10.0, 50.0, 100.0\}$. In agreement with our analysis, [our results](https://tinyurl.com/mwpresr2) demonstrate that on the categories with lower alignment-performance correlation, increasing $w$ leads to worse performance, whereas for classes with higher alignment-performance correlations, high values of $w$ improve performance.
These results show that quantifying the relation between alignment and performance, **even with unimodal models that are not explicitly aligned**, is useful for practitioners when deciding how much to explicitly align the modalities. We envision that future work would make use of alignment-performance correlations to automatically determine the weight on the alignment loss for each downstream task, making it possible to train on many tasks simultaneously without a combinatorially expensive hyperparameter search (if there are 23 tasks and 8 discrete values of $w$, there are $8^{23}$ combinations of parameters to search over). We note that while we experiment with CLIP, our proposed framework is agnostic to the specific alignment loss. This is because our contribution is the **balance between a supervised objective that directly optimizes some downstream performance and an alignment metric, which is interchangeable.** Therefore, alignment-performance correlations remain useful regardless of whether the modalities are aligned through CLIP or a different approach such as FactorCL [1], as brought up by reviewer B2rQ. [1] Liang et al. “Factorized contrastive learning: Going beyond multi-view redundancy” (2023).
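For clarity, the weighted objective $\mathcal{L} = \mathcal{L}_{\text{sup}} + w \cdot \mathcal{L}_{\text{CLIP}}$ can be sketched in plain Python. The `clip_loss` below is a simplified symmetric in-batch InfoNCE on hypothetical embedding lists — a minimal sketch for illustration, not our actual training code:

```python
import math

def _normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def clip_loss(U, V, temperature=0.07):
    """Symmetric in-batch InfoNCE between paired embeddings U[i] <-> V[i]."""
    U = [_normalize(u) for u in U]
    V = [_normalize(v) for v in V]
    n = len(U)
    # Similarity logits: sims[i][j] = <U[i], V[j]> / temperature.
    sims = [[sum(a * b for a, b in zip(U[i], V[j])) / temperature
             for j in range(n)] for i in range(n)]

    def xent(row, target):
        # Numerically stable cross-entropy of one row against its target index.
        m = max(row)
        log_z = m + math.log(sum(math.exp(s - m) for s in row))
        return log_z - row[target]

    loss_uv = sum(xent(sims[i], i) for i in range(n)) / n
    loss_vu = sum(xent([sims[j][i] for j in range(n)], i) for i in range(n)) / n
    return 0.5 * (loss_uv + loss_vu)

def total_loss(supervised_loss, U, V, w):
    """L = L_sup + w * L_CLIP; w is swept over a grid per downstream task."""
    return supervised_loss + w * clip_loss(U, V)
```

With $w = 0$ the objective reduces to the supervised loss alone, which is why the sweep over $w$ directly probes how much explicit alignment each task tolerates.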
Summary: This paper empirically investigates when and why implicit alignment emerges, and whether alignment consistently predicts task performance, finding that both depend critically on modality similarity and the redundancy or uniqueness of the information provided. Claims And Evidence: The analysis that is conducted depends heavily on the alignment quantification. Is the metric used, HSIC, sufficient to reflect the alignment quality? Such a kernel-based metric is highly sensitive to the chosen kernel, sample size, and other hyperparameters. If the metric cannot truly reflect the alignment level, the experiments, like the emergence of alignment across heterogeneity and uniqueness, are questionable. Methods And Evaluation Criteria: No new method is proposed. Theoretical Claims: No theoretical claim. Experimental Designs Or Analyses: The experimental analysis of the synthetic dataset is interesting. However, more comprehensive experimental analysis should be performed on a large-scale real-world dataset rather than a subset of MultiBench to study the emergence property. Supplementary Material: I have read the Alignment Computation and Additional Figures. Relation To Broader Scientific Literature: The results are relevant to the analysis of multimodal alignment. Essential References Not Discussed: Most relevant papers are discussed. Other Strengths And Weaknesses: The motivation for analyzing the alignment is interesting. Weakness: 1. Is the metric used, HSIC, sufficient to reflect the alignment quality? Such a kernel-based metric is highly sensitive to the chosen kernel, sample size, and other hyperparameters. If the metric cannot truly reflect the alignment level, the experiments, like the emergence of alignment across heterogeneity and uniqueness, are questionable. 2. No new method is proposed, which limits the contribution of this paper. 3. Most experimental analysis is based on the synthetic datasets, which is not convincing.
More comprehensive experimental analysis should be performed on a large-scale real-world dataset, rather than a subset of MultiBench, to study the emergence property. 4. It would be interesting to see the quantification of the uniqueness level in the real-world dataset. 5. [1] proposes that different random initializations could also cause a modality gap. Will this affect the conclusion of this paper? [1] Liang et al. “Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning” (2022). Other Comments Or Suggestions: see my weaknesses. Questions For Authors: see my weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive criticism and are glad that they find our analysis interesting. Below we address the reviewer’s questions and concerns. **Under “Claims And Evidence”:** > Is the used metric, HSIC, sufficient to reflect the alignment quality? … is highly sensitive to the chosen kernel, sample size and other hyperparameters. We believe that the HSIC metric is sufficient to capture alignment quality, as we use a specific linear kernel consistently across all experiments. Moreover, this kernel has no hyperparameters, making it the simplest choice. To verify the robustness of our results, we also evaluate them using alternative alignment metrics. We perform additional experiments on the synthetic data with additional alignment metrics and with different sample sizes, which demonstrate that our findings are robust to hyperparameters and consistent across different metrics. Specifically, we report results with unbiased CKA [1] with a linear kernel (our original alignment metric), unbiased CKA with an RBF kernel, Mutual KNN [2], and SVCCA [3], and run all metrics with 256, 512 (our original sample size), and 1024 data points. We report our [updated results here](https://tinyurl.com/yfzwzs2s). For all metrics and sample sizes, the maximum alignment decreases with increasing uniqueness as well as increasing heterogeneity. Additionally, the relations between alignment, performance, and depth are consistent across different sample sizes. Across all alignment metrics, performance and depth are positively correlated over different uniqueness values, whereas alignment-performance as well as alignment-depth correlations can be weak or negative for increased uniqueness, indicating that our findings are robust to different kernels and alignment metrics. [1] Kornblith et al. “Similarity of Neural Network Representations Revisited” (2019). [2] Huh et al. "The platonic representation hypothesis" (2024). [3] Raghu et al.
“SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability” (2017). **Under “Other Strengths and Weaknesses”:** > No new method is proposed, which limits the contribution of this paper. While our paper does not propose a new method, we believe that our contributions are significant; see our response to reviewer obVJ for a more in-depth discussion. > Most experimental analysis is based on the synthetic datasets … rather than a subset of MultiBench to study the emergence property. We would like to emphasize that MultiBench datasets (used in Section 6) are real-world, with CMU-MOSEI and UR-FUNNY containing 22k and 16k video snippets respectively. In addition, we provide new experimental results on MM-IMDb and have improved our experiments on the Wikipedia image-text dataset. We use GPT-4 to synthesize text captions with unique information that is not present in the images, ensuring that the resulting datasets retain the semantics of real-world text and images. Our findings support our paper’s key claims (see response to obVJ for more details). > It would be interesting to see the quantification of the uniqueness level in the real-world dataset. We agree that quantifying uniqueness is an interesting direction, and our results have shown the potential for alignment-performance correlation to be used for quantification. While different pairs of modalities have varying levels of heterogeneity, which can make it difficult to quantify uniqueness across datasets, we propose that alignment-performance correlations can quantify information content between different downstream tasks within a given multimodal dataset. We present new results on MM-IMDb [1], a dataset for classifying movie genres with image and text modalities. Each movie can be labeled with 1 or more genres, and there are 23 classes. We compute cross-modal alignment using various vision models and language models.
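For concreteness, a minimal sketch of linear-kernel CKA on hypothetical representation matrices (rows are examples, columns are features). This is the simpler biased estimator; the unbiased variant used in the paper applies a small correction to the centered Gram matrices:

```python
def _gram_linear(X):
    """Gram matrix K[i][j] = <X[i], X[j]> for row-wise representations X."""
    return [[sum(a * b for a, b in zip(xi, xj)) for xj in X] for xi in X]

def _center(K):
    """Double-center a Gram matrix (subtract row/column means, add grand mean)."""
    n = len(K)
    row = [sum(r) / n for r in K]
    tot = sum(row) / n
    return [[K[i][j] - row[i] - row[j] + tot for j in range(n)] for i in range(n)]

def _hsic(Kc, Lc):
    """Frobenius inner product of two centered Gram matrices."""
    n = len(Kc)
    return sum(Kc[i][j] * Lc[i][j] for i in range(n) for j in range(n))

def linear_cka(X, Y):
    """CKA(X, Y) = HSIC(Kx, Ky) / sqrt(HSIC(Kx, Kx) * HSIC(Ky, Ky))."""
    Kx, Ky = _center(_gram_linear(X)), _center(_gram_linear(Y))
    return _hsic(Kx, Ky) / (_hsic(Kx, Kx) * _hsic(Ky, Ky)) ** 0.5
```

Note that the score is invariant to isotropic rescaling of either representation, which is one reason it is a convenient alignment measure across differently-scaled encoders.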
To measure performance, we train linear layers on the last-layer hidden representations of the language models, resulting in F1-scores for each class. [Our results](https://tinyurl.com/5my72nnb) demonstrate that the relation between alignment and performance varies depending on the classification task — we see that the slope of the linear fit to alignment and performance scores is weak or even negative, suggesting that for certain movie genres, there is greater task-relevant information that is unique to the language modality. [1] Arevalo et al. “Gated multimodal units for information fusion” (2017). > [1] proposes that different random initializations could also cause a modality gap. Will this affect the conclusion of this paper? We would like to clarify that our experiments on synthetic data are run with 5 different seeds, and for our experiments on MultiBench datasets, we compute alignment-performance correlation over 3 seeds. Hence, we believe that our results are robust to initializations. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed reply. I have a similar concern to reviewer obVJ about "the impact of such relation on practical usage". I agree that the contribution of theoretical analysis could be significant. However, more theoretical analysis and a demonstration of practical usage are necessary to make the paper sound. The effect of uniqueness and shared information among different modalities has been quantified by FactCL [1]. I apologize for not bringing up this paper in the first place. FactCL uses mutual information for analysis and for measuring the impact of uniqueness in different modalities. Their quantification of "uniqueness" leads to a novel method for alignment and demonstrates significantly better performance on both synthetic and real-world MultiBench datasets. How to utilize the proposed correlation relationships for practical usage is a big concern.
Moreover, I am not sure if the correlation analysis is sufficient, as correlation is not causation. A strong correlation does not tell you whether one variable causes changes in the other or whether both are driven by some unobserved factor. I will carefully consider my rating and would appreciate it if the authors could make further clarifications. [1] Liang et al. “Factorized contrastive learning: Going beyond multi-view redundancy” (2023). --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's response and the opportunity to further clarify our work. Below, we address concerns on the practicality of our analysis with additional experimental results. While we recognize the importance of practical implications, we would like to respectfully emphasize that the primary contribution of our study lies in the systematic refutation of the PRH—an aspect that, to the best of our knowledge, has not been previously established. Although our conclusion may align with intuitive expectations, we believe that this does not diminish the novelty of formally demonstrating that the PRH does not universally hold. > I have a similar concern with the reviewer obVJ about "the impact of such relation on practical usage" … necessary to make the paper sound. To demonstrate the practical usage of our analysis, we present the following use case. Consider a setting where there is a large dataset of paired input data, but only a small subset of the dataset has labels for downstream tasks, due to the cost of annotation. An important problem is: how can a practitioner utilize the supervision from the data subset while still ensuring good generalization by leveraging the unlabeled paired data? One approach is to finetune a pretrained model using both a supervised loss and an explicit alignment objective, such as the CLIP loss.
However, an important question comes up: how should the contribution of the supervised and alignment losses be balanced to maximize performance? The loss takes the form $\mathcal{L} = \mathcal{L}_{\text{sup}} + w \cdot \mathcal{L}_{\text{CLIP}}$. From our analysis, we know that the “ideal” amount of alignment is dataset and task-specific. Specifically, alignment-performance correlations have a direct algorithmic implication: if the alignment-performance correlation is small, then performance degrades or does not change when increasing the weight on the explicit alignment objective. Conversely, when the alignment-performance correlation is larger, performance should increase with a larger weight on the alignment objective. To test this idea, we run experiments on the MM-IMDb dataset on 10 different binary classification tasks, where we sample 1024 labeled examples for each of the train, validation and test sets to simulate the data-scarce scenario (in comparison to the original dataset size of 25k examples). The alignment-performance correlations can be easily computed with pretrained vision and language models using the sampled data. We start with vision and language encoders pretrained with CLIP and finetune the models with $\mathcal L$, where the weight on the alignment objective varies in $w \in \{0, 0.1, 0.25, 0.5, 1.0, 2.0, 5.0, 10.0, 50.0, 100.0\}$. In agreement with our analysis, [our results](https://tinyurl.com/mwpresr2) demonstrate that on the categories with lower alignment-performance correlation, increasing $w$ leads to worse performance, whereas for classes with higher alignment-performance correlations, high values of $w$ improve performance. These results show that quantifying the relation between alignment and performance, **even with unimodal models that are not explicitly aligned**, is useful for practitioners when deciding how much to explicitly align the modalities.
We envision that future work would make use of alignment-performance correlations to automatically determine the weight on the alignment loss for each downstream task, making it possible to train on many tasks simultaneously without a combinatorially expensive hyperparameter search (if there are 23 tasks and 8 discrete values of $w$, there are $8^{23}$ combinations of parameters to search over). > The effect of uniqueness and shared information … FactCL uses mutual information for analysis and measuring the impact of uniqueness in different modalities. As discussed in our above response, our analysis is useful for understanding how representation alignment relates to performance on some downstream task, and therefore, practitioners would use our analysis to design a better training objective that optimally balances explicit alignment with direct optimization of downstream performance. While we experiment with CLIP, our proposed framework is agnostic to the specific alignment loss. We believe our work is complementary to the literature on improving explicit alignment objectives for paired, unlabeled data. We will clarify this difference in our updated paper. > Moreover, I am not sure … by some unobserved factor. We agree that correlation does not imply causation. However, we indeed show that the alignment-performance correlations have direct implications for how practitioners should balance explicit alignment with supervised learning. In addition, we have extensive experiments on synthetic and real-world settings, showing that factors such as uniqueness and heterogeneity impact the relation between alignment and performance. [1] Liang et al. “Factorized contrastive learning: Going beyond multi-view redundancy” (2023).
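The alignment-performance correlation and fitted slope referenced throughout can be computed with the standard Pearson and ordinary-least-squares formulas; a minimal sketch on hypothetical (alignment, performance) pairs:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between paired score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def ols_slope(xs, ys):
    """Slope of the least-squares line fitted to performance vs. alignment."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

In the usage described above, `xs` would hold alignment scores across model pairs and `ys` the corresponding downstream performance; a weak or negative `pearson_r` signals a task where explicit alignment is unlikely to help.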
Neural Guided Diffusion Bridges
Accept (poster)
Summary: The paper considers the challenging and widely-applicable problem of conditioning a reference diffusion process to sample rare events or desired outcomes. Building on "guided proposal" approaches which construct the conditioned process for a cleverly-chosen tractable process, the authors propose to learn an additional control drift, parameterized using neural networks and minimizing a stochastic optimal control or mode-seeking KL objective via backpropagation through trajectories and the reparameterization trick. The authors demonstrate the efficacy of the proposed approach on simple linear examples, along with cell dynamics, a FitzHugh-Nagumo excitable system, and a stochastic landmark matching problem. Claims And Evidence: The method is well-justified and incorporates useful conditioning information via the guided proposal. As mentioned below, I encourage the authors to clarify the implementation and performance differences between the guided proposal and the proposed neural-network improvement, as the guided proposal provides a strong baseline in most of the experiments considered. Methods And Evaluation Criteria: The authors consider a range of applications for validating the efficacy of the proposed method. To ablate the necessity of the guided proposal, it might be interesting to consider the Neural Guided Bridge learning from the base drift directly (i.e. no guided proposal). While this may fail in more complex settings, it would be interesting to evaluate for the Brownian Bridge and OU-Process in Sec. 5.1. Theoretical Claims: I would like to see more explicit reasoning to justify Eq. 5 and 10-12, just to give the reader more intuition for (i) the evolution of the auxiliary h-function and (ii) the appearance of particular terms in Eq 10 (particularly the Hessian). I have confirmed the correctness of the proposed method.
Experimental Designs Or Analyses: The authors compare to existing methods such as score matching and adjoint bridges, along with the vanilla guided proposal. While score matching and adjoint bridge appear to struggle with simple examples, the authors probe examples where the neural guided bridge improves over the vanilla guided proposal. *Cell Dynamics* - For cell dynamics with multi-modality in the conditional paths, the neural guided bridge appears to better match the marginals of the unconditioned process in Fig 7. - Is this the right evaluation? (assuming the Original entry is obtained from the unconditional process). One could imagine that some modes of intermediate marginals do not result in the process hitting the desired v. *FHN Model* - the neural guided samples in Fig 9-10 are somewhat mode-seeking, but do not produce samples which interpolate between the modes (as in the guided proposal). Here the reference process samples are appropriately filtered according to consistency with $v$. Supplementary Material: **The authors should make every effort to include as many experimental results as possible in the main text.** At present, **all** experimental results are in the supplementary material. The additional page available in the camera-ready should help with this. Relation To Broader Scientific Literature: The paper builds on the guided proposals of Schauer et al. 2017 and Mider et al. 2021 using neural network control drifts. The latter idea has been an active area of recent research in the diffusion model finetuning and diffusion bridge literature, along with transition path sampling applications in computational chemistry. However, these works often use the base drift of the original problem without including additional drift terms from the auxiliary, tractable bridge process. Hence, I expect the paper to be interesting to the ICML community.
Essential References Not Discussed: The authors might consider more fundamental citations regarding Prop 3.2, which appears to be well known. If I am missing technical conditions relevant to the setting the authors have in mind (which require this recent result), then these might be stated. *Non-essential recent work:* - Denker et al. (NeurIPS 2024), "DEFT: Efficient Fine-tuning of Diffusion Models by Learning the Generalised h-transform". Of particular note is the stochastic control objective using the Log Variance divergence (their conditional score matching training is outside the setting of this paper). - Seong et al. (ICLR 2025): "Transition Path Sampling with Improved Off-Policy Training of Diffusion Path Samplers": context of transition path sampling in computational chemistry using a control objective and Log Variance divergence - on the surface, Log Variance losses for stochastic control problems (i) leverage off-policy samples and (ii) may mitigate the mode-seeking behavior associated with the KL through a clever (though additional) choice of exploration samples. These considerations would be left for future work. The authors might also be interested in recent work improving upon the adjoint method for backpropagation through trajectories (and references therein). - Domingo-Enrich et al. 2024, "Adjoint Matching" Other Strengths And Weaknesses: See other comments Other Comments Or Suggestions: 1) There is enough notation appearing that it might be useful to remind the reader of several aspects. - In Eq 15, it would be useful to be able to denote $\tilde{b}^{\circ}$ as a conditioned process with reference drift given by the linear $\tilde{b}_t$ in Eq 9, or simply rewrite the full form of Eq 15. - In reading the paper, I had to remind myself that the Brownian bridge "prior information" $\frac{v-X_t^{\bullet}}{T-t}$ was not an ad-hoc decision, but rather is captured by $\tilde{b}^{\circ}$ (even for $\beta(t)=0, B(t) =0$ in choosing $\tilde{b}_t$).
It might be useful to state that the auxiliary process / guided proposal produces this term. 2) I had trouble parsing $q(v|y)$ as the likelihood at first glance, and it is written as $\ell(y|v) = q(v|y)$ in line 150L. Please mention the desired role of $v$ before Prop 3.2 (or even provide the full Bayesian view). 3) Why does it make sense to use $b(t,v)$ as conditioning at intermediate time points, given the fact that $r(s,x)=∇ \log h(s,x) = ∇ \log \int p(T, y|t,x) q(v|y) \nu(dy)$? Further, why is this meaningful when $v \in \mathbb{R}^{d^\prime}$ with $d^\prime < d$? Questions For Authors: 4) Is the pCN used for experiments with the guided proposal? I think this is difficult for the uninitiated reader to follow, particularly when it comes to Lines 347-355R comparing the methods. 5) If MCMC vs. no-MCMC is a distinction between the guided proposal and neural guided proposal, then this should be more clearly emphasized to distinguish the presented method and demonstrate its efficacy over the guided proposal. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Claims and Evidence** The neural guided bridge relies on training a neural network. Once trained, independent samples are obtained. The quality of the guided proposal depends strongly on the nonlinearity of the diffusion and the number of pCN steps required in the MCMC algorithm. This is case dependent. For the performance evaluation, please refer to the table in our reply to Reviewer 8sR1. **Methods and Evaluation Criteria** Inclusion of the guiding term from the guided proposal is necessary to ensure the guided neural process will be a bridge. Without it, the neural net has to learn the unbounded term which behaves near the endpoint like $(v-x)/(T-t)$ (in case of a full observation and a uniformly elliptic diffusion). **Theoretical Claims** 1. A proof that $E_t$ in Eq. (5) is a local martingale is given in Palmowski & Rolski: it follows from partial integration and involves Dynkin's martingale. We believe this to be out of scope for this paper as it heavily depends on stochastic calculus. 2. For Eq. (10): the “weight” is obtained by integrating $$\frac{({\mathcal{A}}-\tilde{\mathcal{A}})\tilde{h}}{\tilde{h}}=\sum_i(b_i-\tilde{b}_{i})\frac{\partial_i\tilde{h}}{\tilde{h}}+\frac{1}{2}\sum_{i,j} (a_{ij}-\tilde{a}_{ij})\frac{\partial^2_{ij}\tilde{h}}{\tilde{h}}$$ over $[0, t]$. 3. Eq. (11) and (12) are taken from Section 2 of Mider et al. Due to the space limitation, we can't include more details about 2 and 3 in this reply, but we are happy to discuss them further in the conversation. **Experimental Designs or Analyses** * "Cell dynamics": Yes, this is true. Empirical results show multimodal marginal distributions emerge after $t=2.0s$ (Fig. 6), with neural-guided bridges evolving primarily unconditionally beforehand. Peak splitting coincides with intensified drift forcing toward $v$, matching guided proposal dynamics.
Neither approach fully replicates the three unconditional sampling modes, though all three become distinct during $t=2.0–3.0s$ under weakening constraints – despite potential intermediate convergence failures toward $v$. **Supplementary Material** We acknowledge the importance of additional experimental results, and will prioritize space allocation for key figures and tables while maintaining methodological rigor during the revision of the manuscript. **Essential References Not Discussed** We agree that Proposition 3.2 is known in the literature. However, we did not find another reference where this is spelled out for our purposes in detail. We simply want to have one clean statement that explains the change-of-measure to $\mathbb{P}^\star$ for this particular $h$-function, in the setting of SDEs. Thanks for pointing out the “non-essential recent work”: 1. Denker et al.'s diffusion framework (Proposition 2.2) shares our Proposition 3.2's $h$-function concept (termed "generalized $h$-transform"), though methodologically distinct through denoising score matching. We will add a reference to contextualize prior appearances of this construct. 2. While our focus remains on the KL divergence's mode-seeking properties, we acknowledge the log-variance divergence's potential benefits for mode collapse mitigation. This comparison will be expanded in future research. 3. The adjoint matching method shows promising scalability for high-dimensional applications (images/point clouds). We plan to investigate adapting our objective function into this framework. **Other comments or suggestions** 1. (a) We agree that it is useful to remind the reader of the definition of $b^\circ$ in Eq. (6). There is no $\tilde{b}^\circ$ in our paper. (b) We agree with your suggestion and will add a remark at the end of Section 3.2 to explain this.
Indeed, if $\sigma$ is invertible, then for $\tilde{X}_t = \sigma W_t$ the transition densities $\tilde{p}$ are Gaussian and $$ \nabla_x \log \tilde{p}(t,x; T,v) = (\sigma\sigma^T)^{-1} \frac{v-x}{T-t}.$$ Therefore, the bridge has drift $(v-x)/(T-t)$. We will add this to the text. 2. We will clarify this in the revision of the manuscript. The following hierarchical model is meant: $$ v\mid y\sim q(v\mid y), \quad y\sim p(T,y \mid 0,x_0).$$ Here, $y$ is the parameter that gets the prior $p( T,y \mid 0,x_0)$ assigned; $v$ is the observation. In this model, the likelihood can be written as $L(y; v)=q(v\mid y)$. 3. We are unsure what is meant by the remark on $b(t,v)$. Regarding the reason it is useful for $d'<d$: we may only observe certain components of the diffusion. For example, if we have a two-dimensional diffusion and observe $v \sim N(x_1, \sigma^2)$, then the dimension of the observation is lower than that of the diffusion. The FitzHugh-Nagumo example is an illustration of this. 4. Yes, we will stress this in the revision and clarify the explanation given in lines 347-355. Indeed, for the guided proposal we always use MCMC with pCN steps, contrary to the other methods. 5. We think we have addressed this comment in the previous item.
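As a quick numerical illustration of this drift (a sketch with hypothetical scalar parameters, not the paper's code), an Euler-Maruyama discretization of $dX_t = \frac{v - X_t}{T-t}\,dt + \sigma\,dW_t$ indeed pins the path at $v$:

```python
import math
import random

def simulate_bridge(x0, v, T=1.0, n_steps=1000, sigma=1.0, seed=0):
    """Euler-Maruyama for dX_t = (v - X_t)/(T - t) dt + sigma dW_t.

    The guiding drift (v - x)/(T - t) is unbounded near t = T and forces
    the path to v; we stop one step before T to avoid the singularity.
    """
    rng = random.Random(seed)
    dt = T / n_steps
    x = float(x0)
    for k in range(n_steps - 1):
        t = k * dt
        x += (v - x) / (T - t) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x
```

For instance, `simulate_bridge(0.0, 2.0)` returns a value close to 2 (the bridge marginal one step before $T$ has standard deviation of roughly $\sqrt{dt}$), illustrating why the neural correction only needs to learn a bounded adjustment on top of this guiding term.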
Summary: This paper introduces Neural Guided Diffusion Bridges, which add variational inference to the guided proposal framework. Neural guided diffusion bridges enable bridge sampling without MCMC. They perform competitively in many experiments and can handle rare events, which is very hard for other methods. Despite these strong performances, the method has some limitations: it can become mode-seeking under multimodality, and it occasionally samples from only a single mode in the FitzHugh-Nagumo model's rare event. ## update after rebuttal The rebuttal clarified my concerns, and I continue to see this paper as leaning towards acceptance. Claims And Evidence: The claims are made explicit through mathematical proofs. Also, the claims are supported by appropriate experiments. Methods And Evaluation Criteria: The methods are explicit. In addition, the authors use various examples to demonstrate their method's competitive performance in the experiments. Theoretical Claims: I have one question about lines 271-272. How can the authors use 'local minimizer' and 'lower bound' at the same time? That is, is there any condition under which we attain the lower bound by choosing a local minimizer? Experimental Designs Or Analyses: In lines 402-405, the authors say, "The neural guided bridge samples paths from only one of the two modes, though the sampled paths appear very similar to the actual conditioned paths." Questions 1. Could we choose which of the two modes to sample from? 2. Is there any reason for that result? I would like to know the authors' thoughts. Supplementary Material: I checked all parts of the supplementary material, both the theoretical details and the experiment details. Relation To Broader Scientific Literature: The contributions could be utilized for generative modeling, especially sampling. Although sampling is very important for generative models, accurate sampling comes at a high computational cost. 
The contributions could be helpful in overcoming those limitations. Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths** This paper shows strong mathematical skill, resulting in a neat development of the main idea. **Weaknesses** It would be better to show a realistic example, such as generated outputs (like images, video, and text) and metrics related to those outputs. Other Comments Or Suggestions: No Questions For Authors: Similar to **Experimental Designs or Analyses**, is there any proof or reasoning behind the experiments' results? The paper provides only results and explanations, without proof or detailed reasoning about the experimental results. It would be better if you could give proof or reasoning about the experiments' results. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Theoretical Claims** The formulation was imprecise and we propose to reformulate it as follows: > If $\theta_{\rm opt}$ is a local minimizer of $L$ and $L(\theta_{\rm opt})=-\log \frac{\tilde{h}(0,x_0)}{h(0,x_0)}$, then $\theta_{\mathrm{opt}}$ is a global minimizer. This implies $D_{\mathrm{KL}}(\mathbb{P}^{\bullet}_{\theta_{\mathrm{opt}}} \,\|\, \mathbb{P}^{\star})=0$, from which we obtain $\mathbb{P}^{\bullet}_{\theta_{\mathrm{opt}}} = \mathbb{P}^{\star}$. **Experimental designs or analyses** We cannot explicitly choose the target mode to sample from. Theoretically, the variational class $\{\mathbb{P}^{\bullet}_{\theta};\theta\in\Theta\}$ is large enough to contain the target $\mathbb{P}^{\star}$, and the global optimum $\mathbb{P}^{\bullet}_{\theta_{\mathrm{opt}}} = \mathbb{P}^{\star}$ can be guaranteed. In practice, however, the inherent non-convexity of the objective function induced by $G(s, x)$ predisposes $\vartheta_{\theta}$ to converge to local minima, as empirically evidenced by sample concentration within individual modes. Directing the optimization trajectory within this complex landscape remains nontrivial, and we currently do not have a reliable method to steer the process towards preferred local minima. **Questions for Authors** We don't fully understand what you mean by "It would be better if you could give proof or reasoning about the experiments' result.". We kindly ask you to clarify so we can answer your question accurately. --- Rebuttal Comment 1.1: Comment: Reply for **Questions for Authors**: In "Experimental Designs or Analyses", I wrote "the neural guided bridge samples paths from only one of the two modes...". I would like to understand why these two modes appear. Note for **Weaknesses**: The point I raised in the weaknesses section is just a suggestion. While providing additional realistic examples could be helpful, I understand that it may not be easy within the rebuttal period. 
I believe that using tables or graphs, as you’ve already done, should be sufficient. --- Reply to Comment 1.1.1: Comment: Thank you for your clarification. Consider the deterministic part of the **unconditioned** FHN model: $\frac{dX_{t,1}}{dt} = F_1(X_{t})=\frac{1}{\chi}(X_{t,1} - X_{t,2} + s - X^{3}_{t,1})$ $\frac{dX_{t,2}}{dt} = F_2(X_{t})=\gamma X_{t,1} + X_{t,2} + \alpha.$ We now look into its fixed points by inspecting the conditions for $\frac{dX_{t,1}}{dt} = \frac{dX_{t,2}}{dt} = 0$, leading to the equations: $X_1 - X_2 + s - X_1^3 = 0,$ $\gamma X_1 + X_2 + \alpha = 0.$ Substituting $X_2$ yields: $X_1^3 - (1 + \gamma)X_1 - (s + \alpha) = 0,$ a cubic equation whose roots give the fixed points. The discriminant is $\Delta = 4(1+\gamma)^3 - 27(s+\alpha)^2$. Under the setting considered in the paper, $[\chi, s, \gamma, \alpha, \sigma] = [0.1, 0, 1.5, 0.8, 0.3]$, we have $\Delta > 0$, so there are three real roots. Explicitly, these three roots are $X_1=\\{1.72, -1.39, -0.34\\}$, with corresponding $X_2 =\\{-3.38, 1.28, -0.30\\}$, leading to three distinct fixed points. The stability of these points can be verified by evaluating the Jacobian $J$ of $[F_1, F_2]^T$ at $(X_1, X_2)$, which is defined by: $J = \begin{bmatrix} \frac{\partial F_1}{\partial X_{t,1}} & \frac{\partial F_1}{\partial X_{t,2}} \\\ \frac{\partial F_2}{\partial X_{t,1}} & \frac{\partial F_2}{\partial X_{t,2}} \end{bmatrix} = \begin{bmatrix} \frac{1-3X^2_{t,1}}{\chi} & -\frac{1}{\chi} \\\ \gamma & 1 \end{bmatrix}.$ Stability requires $\rm{Tr}(J)<0$ and $\rm{Det}(J)>0$, and it turns out that all three points are **unstable**. However, when conditioning on observations, the drift changes. Assuming this change is known, we can apply the previous analysis to identify the fixed points and assess their stability. 
If some fixed points are stable, the diffusion term, though previously neglected, introduces noise around these points and enables transitions between their basins of attraction. The number of fixed points reflects the number of modes. In practice, however, the drift induced by conditioning is not available in closed form. As a result, we can only state that, under the rare event considered, the conditioned process appears to have exactly two stable fixed points. An alternative approach is to numerically approximate the Kolmogorov forward or backward equations to study the transition densities, as these densities are not available in closed form either. We hope this addresses your question.
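As a sanity check of the fixed-point arithmetic in the reply above, the cubic's roots, its discriminant, and the trace/determinant stability test can all be verified numerically. The following snippet is our own illustration using NumPy with the parameter values quoted in the rebuttal:

```python
import numpy as np

# Parameters quoted in the rebuttal: [chi, s, gamma, alpha] = [0.1, 0, 1.5, 0.8]
chi, s, gamma, alpha = 0.1, 0.0, 1.5, 0.8

# Discriminant of X1^3 - (1 + gamma) X1 - (s + alpha) = 0
disc = 4 * (1 + gamma) ** 3 - 27 * (s + alpha) ** 2   # > 0 means three real roots

# Fixed points: roots of the cubic, with X2 obtained from gamma*X1 + X2 + alpha = 0
X1 = np.sort(np.roots([1.0, 0.0, -(1.0 + gamma), -(s + alpha)]).real)
X2 = -gamma * X1 - alpha

# Stability at each fixed point: need Tr(J) < 0 and Det(J) > 0
def is_stable(x1):
    J = np.array([[(1.0 - 3.0 * x1**2) / chi, -1.0 / chi],
                  [gamma, 1.0]])
    return np.trace(J) < 0 and np.linalg.det(J) > 0

stability = [is_stable(x1) for x1 in X1]
print(disc, X1.round(2), X2.round(2), stability)
```

Running this reproduces $\Delta = 45.22 > 0$, the three roots $X_1 \approx \{-1.39, -0.34, 1.72\}$ with matching $X_2$, and confirms that none of the three fixed points satisfies both stability conditions.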
Summary: The paper presents a novel method for simulating conditioned diffusion processes, called diffusion bridges. This approach trains a neural network to approximate the bridge dynamics, providing a more robust and efficient alternative to traditional methods like MCMC or reverse-process modeling, particularly for rare events and multimodal distributions. By learning a flexible variational approximation of the diffusion bridge path measure, partly defined by a neural network, the method enables efficient independent sampling similar to simulating the unconditioned process. The paper validates this "neural-guided diffusion bridge" through various numerical experiments, comparing its performance against existing techniques in challenging scenarios. Claims And Evidence: yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: 1. Could you provide more details on the specific neural network architecture used to parameterize the drift correction term v(t,x) in the different experiments? For example, what was the rationale behind selecting the number of hidden layers, the dimensionality, and the activation functions? 2. For the "Stochastic landmark matching" task, since the true bridge is intractable, it’s challenging to make a quantitative comparison. Are there alternative metrics or qualitative analyses, beyond visual inspection, that could help assess the performance of the proposed method relative to the guided proposal in high-dimensional settings? Theoretical Claims: yes, all the theoretical claims made by the authors are correct. Experimental Designs Or Analyses: 1. The paper emphasizes that the proposed method learns directly from conditional samples, offering improved training efficiency compared to score-learning methods that rely on unconditional samples. 
Could you provide a more quantitative comparison of the training times and computational resources needed for your method versus the score-matching approach from (Heng et al., 2022) and the adjoint method from (Baker et al., 2024a) in one or more of the experimental settings? 2. The paper notes that the canonical score-matching loss involves inverting $\sigma\sigma^T$, which can be challenging for high-dimensional and hypo-elliptic diffusions. Could you elaborate on how your method avoids these particular challenges and how it may offer advantages in such scenarios? Supplementary Material: yes, I review the supplementary material. Relation To Broader Scientific Literature: The proposed method aims to improve the efficiency of guided proposal-based simulation by replacing computationally expensive MCMC/SMC steps with a learned neural network, enabling fast, independent sampling. It also offers an alternative to score-learning methods that learn directly from conditional information, potentially leading to better performance in challenging scenarios like rare events. Essential References Not Discussed: All relevant references are discussed. Other Strengths And Weaknesses: Strengths: Demonstrates robustness across different diffusion specifications and conditioning scenarios. Effective in handling rare events and multimodal distributions. Enables efficient independent sampling after training, with a cost similar to the unconditioned forward process. Scales well to relatively high-dimensional problems, outperforming MCMC-based methods. Weakness: Although the neural-guided bridge and the guided proposal demonstrated comparable performance, both methods could only capture part of the modes in a multimodal distribution resulting from specific initial conditions. The paper notes that, while the neural bridge might not recover all modes, it offers faster sampling as a trade-off. 
It also suggests that additional MCMC updates could be applied to the trained neural bridges for better-quality samples. The "FitzHugh-Nagumo model" experiment (Section 5.3), which involved a rare event, also highlighted a limitation in capturing all modes. Although the reference-conditioned process was bimodal, the neural-guided bridge only sampled paths from one of the two modes. Other Comments Or Suggestions: The paper is well-written and easy to follow. Questions For Authors: 1. In the Brownian bridge and Ornstein-Uhlenbeck bridge experiments, the lower bound of the loss function L(θ) could be computed analytically and used as a benchmark. For more complex, non-linear examples where this is not feasible, how do you ensure that the trained neural network is performing optimally and not converging to a suboptimal solution? 2. In Section 3.3, the importance of "matching conditions" for the linear auxiliary process is mentioned, particularly for hypo-elliptic diffusions, to ensure $P^\star \ll P$. Could you elaborate on these conditions and explain how they were satisfied in the hypo-elliptic FitzHugh-Nagumo model discussed in Section 5.3? 3. In the "Multi-modality" experiment (Section 5.2), both the neural guided bridge and the guided proposal were able to cover only part of the modes. Considering the mode-seeking nature of variational inference, did you explore any strategies, or could you suggest approaches, to better capture the full multimodality of the conditioned process without relying on multiple MCMC chains? 4. Could you discuss the potential advantages or disadvantages of jointly learning $\vartheta_\theta$ and the parameters of the auxiliary linear process (e.g., $B$, $\beta$, $\tilde{\sigma}$) using variational inference, as proposed for future work? What challenges do you anticipate in implementing this joint learning approach? 5. Section 6 suggests extending the approach to conditioning on partial observations at multiple future times. 
Could you outline the key modifications needed in your current methodology to accommodate such conditioning scenarios? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Methods and Evaluation Criteria** 1. All $\vartheta_{\theta}$ implementations use fully-connected networks with $(1+d)$-dimensional inputs (time $t$ and state $x$). Time integration varies by dimensionality: direct concatenation for low-dimensional systems (Brownian/OU/cell/FHN) versus sinusoidal time embeddings with feature-wise modulation (Perez et al.) in high-dimensional landmark processes. Architectures employ tanh activations to ensure Lipschitz continuity (critical for Assumption 4.1 via gradient clipping). Layer counts/hidden dimensions were determined via loss-minimizing parameter sweeps. 2. To our knowledge, no established quantitative metrics exist for high-dimensional performance evaluation. Only visual assessment via brute-force unconditional sampling (cell and FHN), which is prohibitively inefficient for rare events, is available. In contrast, real-world applications (e.g., image/video) permit rigorous benchmarking through domain-specific metrics like FID/IS scores used in translation tasks. **Experimental designs or analyses** 1. We took two representative examples to compare the computational cost of the methods considered in more detail in the following table.

| Methods | #Params (OU) | Time (OU) | #Params (Cell) | Time (Cell) | Complexity |
|-|-|-|-|-|-|
| Adjoint Bridge | 21969 | 93.53s | 22114 | 2465.06s | $\mathcal{O}(d^3)$ |
| Score Matching | 26353 | 335.96s | 26498 | 1496.87s | $\mathcal{O}(d^3)$ |
| Neural Bridge | 1341 | 29.21s | 3362 | 186.02s | $\mathcal{O}(d^2)$ |

2. Our proposed loss eliminates matrix inversion of $a=\sigma\sigma^T$. Precomputable terms $L(t)$, $M(t)$, and $\mu(t)$ are solved once before training, while $\tilde{r}(t,x)$ (Eq. 11b) and $G(t,x)$ (Eq. 10b) are computed during integration, all without requiring $a^{-1}$. This contrasts with the canonical score matching loss (Heng et al.), which requires inverting $\Sigma(t,x)=a(t,x)$. Such inversions prove numerically unstable for near-singular $a$, and critically fail in our FitzHugh-Nagumo example where $a$ is singular. 
Our method remains applicable where score matching becomes undefined. To give more insight into why the (neural) guided bridge works for hypo-elliptic diffusions, consider the SDE $$dY_t = \{b(t, Y_t) + \sigma(Y_t)f(t, Y_t)\}dt + \sigma(Y_t)dW_t,$$ and denote the law of $Y$ by $\mathbb{Q}$. $\mathbb{P}^{\star}$ is absolutely continuous with respect to $\mathbb{Q}$ if there exists a bounded solution $\eta$ to the equation $$b^\star(t,x)- b(t,x)-\sigma(x) f(t,x) = \sigma(x) \eta(t,x).$$ Recall that $b^\star(t,x)=b(t,x) + \sigma(x) \sigma(x)^T \nabla_x \log h(t,x)$. Then the preceding display can be rewritten as $$\sigma(x) \left( \sigma(x)^T\nabla_x \log h(t,x) -f(t,x)\right) = \sigma(x) \eta(t,x).$$ Hence, one can easily see that such an $\eta$ exists due to the specific form of the additional drift. In this way, we circumvent inversion of $\sigma$. This also explains why the additional drift in the neural guided bridge contains the $\sigma$ premultiplication. **Questions for Authors** 1. We cannot ensure this, similarly to other applications of variational inference. But some heuristic methods can be helpful in our case. For example, we can initialize the neural bridge differently, and test whether it learns the same drift term. 2. This "matching condition" appeared first in Theorem 1 of Schauer et al. That paper deals with the fully observed, uniformly elliptic case. It says that $\tilde\sigma$ should satisfy $\tilde{a}(T)=a(T,v)$; see also our discussion in Section 3.3 of our paper. The claim that all examples in Section 5 satisfy the matching conditions needs adjustment. It is true, except for the FitzHugh-Nagumo (FHN) model. Thank you for drawing our attention to this: we adjusted the formulation. The matching condition specifically refers to Assumption 2.4 in Bierkens et al. It consists of verifying 4 inequalities. As the diffusivity is constant, the fourth of these is trivially satisfied. 
The first and third assumptions can be verified similarly to Example 3.2 in Bierkens et al. (with $\Delta(t)=(T-t)^{-1}$). The second assumption there concerns the difference $b(t,x) -\tilde{b}(t,x)$. Inspecting the proof, it suffices that the second assumption holds for $t=T$ and those $x$ for which $Lx=v$. With our choice of $\tilde b(t, x)$, it reads $b(T,x) -\tilde{b}(T,x)=0$, so the second assumption is satisfied as well. 3. Yes, please refer to our reply to Reviewer ysG7's Question 5. 4. We currently choose $B$ and $\beta$ in a rather simple way to ensure the neural bridge ends up at the right point. We do not see any direct complication from parametrising these functions by a neural net, apart from computational resources. Presently, it is unclear if the additional training time is worth the effort. 5. Guided proposals in the case of multiple partial observations are discussed in Mider et al. We would start from this approach, and simply add a drift term on each of the segments between observation times, as we propose in the present paper for only one segment.
Summary: This paper introduces a novel variational method for simulating conditioned diffusion processes (diffusion bridges) by proposing a more expressive guided proposal framework of Schauer et al. (2017) with a learnable drift correction term parameterized by a neural network. By leveraging variational inference, the proposed method overcomes key limitations in existing approaches: guided-proposal-based methods require a careful, non-trivial choice of an auxiliary process and rely on computationally intensive MCMC/SMC updates, while score-learning-based methods often struggle with inadequate exploration of rare event regions and the numerical challenges of inverting nearly singular diffusivity matrices. The method is validated on a diverse set of problems. The experiments, which include comparisons with state-of-the-art guided proposals, score matching, and adjoint-process methods, demonstrate that the neural guided bridge is more flexible and adaptable. Claims And Evidence: - The paper claims that its method can generate independent conditioned samples at a cost comparable to simulating the unconditioned process—thus avoiding computationally intensive MCMC or SMC updates—by employing a variational inference approach; **however, it does not provide a direct comparison of the neural network training time complexity with that of MCMC/SMC or score optimization methods.** - Extensive experiments in Section 5—including tests on the Brownian bridge, Ornstein-Uhlenbeck process, a cell diffusion model, the FitzHugh-Nagumo system, and stochastic landmark matching —offer both quantitative and qualitative evidence that the proposed method is competitively robust compared to guided proposals, score matching, and adjoint-process approaches. 
Methods And Evaluation Criteria: While I appreciate the thorough evaluation of the proposed method, I encourage the authors to provide a direct comparison of the neural network training time complexity with that of MCMC/SMC or score optimization methods. Theoretical Claims: The main contribution of the paper is empirical, with many ideas borrowed from (and referring to) Bierkens et al. (2020). I have skimmed through the derivations in the appendix and they look correct to me. Experimental Designs Or Analyses: The experimental design is comprehensive and well-structured, as it evaluates the proposed method across a variety of diffusion bridge scenarios, from simple one-dimensional cases (Brownian and Ornstein-Uhlenbeck bridges) to more complex, nonlinear, and high-dimensional problems such as cell diffusion, the FitzHugh-Nagumo model, and stochastic landmark matching. The authors complement qualitative visual comparisons with quantitative analyses (e.g., tracking training loss curves and comparing empirical distributions against known lower bounds in simpler cases), which provides a strong basis for assessing the method's performance relative to guided proposals, score matching, and adjoint-process approaches. Supplementary Material: I went through the empirical results presented in the supplementary material and skimmed through the derivations provided at the beginning. Relation To Broader Scientific Literature: The paper highlights several challenges with current methods for simulating diffusion bridges. For guided-proposal-based methods, the authors note that a careful, often complex, choice of an auxiliary process is required. Moreover, when these methods are combined with MCMC or SMC updates, the computational cost can become prohibitive, especially for strongly nonlinear or high-dimensional diffusions. 
In contrast, score-learning-based methods, the other popular approach, rely on samples from the unconditioned process, which often do not adequately cover regions corresponding to rare events, leading to suboptimal performance. Additionally, the canonical loss function in these methods involves inverting the matrix $\sigma\sigma^\top$, a task that is particularly challenging for hypo-elliptic and high-dimensional diffusions where the matrix may be nearly singular, thus complicating stable and accurate optimization. Furthermore, the literature distinguishes these problems from related areas such as the diffusion Schrödinger bridge and neural SDEs, where the focus is on connecting fixed marginal distributions or modeling entire data trajectories under stochastic dynamics. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The idea presented in the paper is novel and innovative, and is particularly relevant as it addresses key challenges in existing methods. Given the growing interest in diffusion-based models across the machine learning community, the proposed method has significant practical relevance for applications requiring efficient sampling of conditioned stochastic processes. Other Comments Or Suggestions: Minor: L166 - fix the typo. Questions For Authors: 1. How crucial is the assumption that the stochastic process X has smooth transition densities for the validity of your theoretical results, and what would be the impact of relaxing this assumption to discrete densities? 2. Why is the non-noisy case emphasized in the paper? The authors primarily discuss scenarios in their experiments where observations are noise-free, but I would appreciate insights into how their approach performs when observations contain noise. 4. 
Section 3.4: Why did the authors opt to add an additional learnable term to the drift rather than attempting to learn the entire drift function, and do they have empirical evidence to support the claim that the guided proposal’s sample paths significantly deviate from the true conditioned paths? 5. Is it possible to use drift correction in existing approaches such as score-optimization-based methods? If yes, can the authors discuss the challenges associated with using drift correction in existing approaches? 6. Given that mode collapse is a well-known challenge for such problems, did you observe any instances of mode collapse during your optimization? If so, could the authors elaborate on the nature of these occurrences and describe the specific strategies or modifications that they implemented to mitigate mode collapse? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Claims and Evidence / Methods and Evaluation Criteria** * While neural bridges and MCMC-guided proposals differ methodologically (complicating direct cost comparisons), their forward simulation costs are comparable: for example, in the landmark process ($d=100$), the forward simulation time is 9.81ms (neural) vs 6.85ms (guided). Training complexities diverge as $O(Nd^3)$ (score-matching/adjoint) vs $O(N^2d^2)$ (neural), where $N$ is the number of time steps. Due to the space limitation, we cannot include benchmark results here, but please refer to our reply to Reviewer 8sR1; we have also added a more comprehensive summary to the manuscript per your suggestion. **Other Comments or Suggestions** We have fixed the equation, thank you! **Questions for authors** 1. We are unsure what you mean by ``discrete densities''. Proposition 3.2 requires $\nabla_x \log h(s,x)$ to be well defined. The simplest way to ensure this is to assume existence of smooth transition densities. Note however that we can write $h(t,x)= \int q(v\mid y) P(T, d y\mid t,x)$, where $P(T, d y\mid t,x)$ is the Markov kernel. This expression also makes sense if the kernel does not admit (smooth) densities with respect to some dominating measure. Informally, the case of conditioning on a state without noise corresponds to taking $q(v\mid y)$ to be a Dirac mass at $v$, and then the above display should be interpreted as $h(t,x) = p(T,v\mid t,x)$. Clearly, existence of $\nabla_x \log h(s,x)$ requires that $h$ is strictly positive and that its gradient exists. Throughout the paper we have assumed that the distribution of $v$ conditional on $y$ is Gaussian. It is however possible to relax this assumption. 2. The non-noisy case is the most challenging. Intuitively, the larger the noise, the less the process needs to be guided in a certain direction. So we tested the approaches on the more difficult case. 
We formulated Proposition 3.2 deliberately for the non-noisy case; in case we condition on the full state without noise, we can simply take $h(t,x) = p(T,v\mid t,x)$. However, in the case of a partial observation, such as with the FitzHugh-Nagumo model, the form of $h$ reads more pleasantly when noise is assumed. We refer to Section 1.3.2 of Bierkens et al. for the somewhat more involved formulas that appear when no noise is assumed. So the motivation is merely to present a ``clean'' statement. In the examples, we assume $q(v\mid y) = \psi(v; L y, \epsilon^2 I)$, where $\epsilon$ is very small. For example, in the FHN example, $L=[1, 0]$, which corresponds to observing only the first component. Taking $\epsilon$ nonzero also makes guided proposals numerically better behaved near the time of conditioning. 3. The drift of the bridge behaves in a very specific way near the point we condition on. This is most easily seen in the uniformly elliptic case and when we condition on the full state $v$ at time $T$. Essentially, the drift of the true bridge behaves for $t\approx T$ as $(v-x)/(T-t)$. Failure to replicate this behaviour breaks absolute continuity between true and proposal bridge laws. Our guided proposal network explicitly replicates this asymptotic behaviour. The neural bridge's auxiliary drift (confined to $\sigma$'s range and bounded) enhances proposals on $[0, T-\eta)$ for small $\eta$, preserving absolute continuity when added. Empirical validation appears as Fig. 6 in Bierkens et al. 4. In general yes. The additional learned drift in our method is the difference between the true score $\nabla_x\log h(s, x)$ and the proposed score $\nabla_x\log \tilde{h}(s, x)$ (with $\sigma$ as a scaling). The objective function is based on a closed-form expression for the likelihood ratio between the target measure $\mathbb{P}^{\star}$ and the proposal measure $\mathbb{P}^{\circ}$ (as in Eq. (10) of the paper). 
The key challenges in applying our method are to select a $\mathbb{P}^{\circ}$ that satisfies two criteria: the likelihood ratio between the target $\mathbb{P}^{\star}$ and $\mathbb{P}^{\circ}$ is tractable, and sampling paths under $\mathbb{P}^{\circ}$ can be done efficiently. 5. Yes, mode collapse occurs in both the cell diffusion and FitzHugh-Nagumo (FHN) experiments, which show multimodal marginal distributions. This stems from the forward KL divergence's mode-seeking objective. We have also considered reverse KL alternatives, but they would introduce problematic stochastic integrals and unstable optimization. Additionally, we see three possible explanations for mode collapse: (i) convergence to a local optimum; (ii) misspecified endpoint guidance from $\tilde X$; (iii) an oversimplified $\vartheta$. We conjecture (i) to be most likely. We implemented two strategies: (i) optimizer/hyperparameter sweeps (Adam/RMSprop/SGD); (ii) dropout regularization. Neither approach yielded significant improvements. We hypothesize that the presence of $G(t,x)$ in the objective function introduces significant non-convexity, rendering the optimization landscape inherently challenging.
Learning to Incentivize in Repeated Principal-Agent Problems with Adversarial Agent Arrivals
Accept (poster)
Summary: This work explores sequential incentive design in a repeated principal-agent problem with adversarially ordered agents. The principal faces $K \geq 2$ (unknown) agent types and selects incentives for one of $N$ arms to influence agent decisions, which are made based on both intrinsic utility and offered incentives. The goal is to minimize regret relative to the optimal ex-post incentive strategy. The contributions of this paper are: 1. Demonstrate that algorithms lacking prior knowledge of agent behaviors (e.g., best-response functions) inevitably incur linear regret in greedy-response, single-arm incentive settings. 2. Propose a reduction-based approach for adversarial linear bandits by discretizing the continuous incentive space, achieving an upper bound of $O\left(\min\left\{\sqrt{KT\log(KN)},\, K\sqrt{T}\right\}\right)$ for greedy-response agents with known types. 3. Introduce a novel polytope discretization for large incentive spaces, enabling $O\left(K\sqrt{T}\right)$ regret when incentivizing multiple arms. 4. Design an algorithm achieving $\tilde{O}\left(L^{1/3}N^{1/3}T^{2/3}\right)$ regret for Lipschitz-smooth agent responses. This study pioneers the analysis of adaptive incentive mechanisms against adversarial agent arrivals, establishing tight regret bounds under assumptions and addressing exploration-exploitation trade-offs in dynamic incentive allocation. Claims And Evidence: In lines 192-196, the authors claim that to achieve sub-linear regret, the method must exactly learn $\Delta$. In this paper, $\Delta \in [0.7, 0.71]$. Does this imply that achieving sub-linear regret is difficult only when $\Delta$ is relatively small? Methods And Evaluation Criteria: The method makes sense, and there are no evaluation datasets. Theoretical Claims: I have checked the correctness of the proofs for the theoretical claims. Experimental Designs Or Analyses: There are no experiments. Supplementary Material: There is no supplementary material. 
Relation To Broader Scientific Literature: This paper focuses on repeated principal-multi-agent problems, which have been extensively studied. The difference lies in the fact that this paper emphasizes agents arriving in an adversarial order, whereas previous work assumes that agents arrive in a fixed distributional order. Essential References Not Discussed: None. Other Strengths And Weaknesses: **Strengths** 1. This paper provides a detailed discussion on the upper and lower bounds of the repeated principal-agent problem under different agent behaviors and incentive strategies. 2. This article has a clear logical structure and rigorous proofs for the theorems. 3. This paper innovatively defines a polytope to address the issue of an excessively large incentive space, using the extreme points of the polytope to determine the optimal incentive. Other Comments Or Suggestions: 1. In line 149, " $L \geq$ " is incomplete. Questions For Authors: In Section 2.2, the arms are mapped to {0, 1}, and the rewards are approximately set to 0.5. Additionally, in Section 3.1, the assumptions regarding tie-breaking have been simplified. If the discretization were to be closer to the real setting and more complex, would it affect the derivation of the upper and lower bounds? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their review. We now address their concerns. **Typo regarding $L$:** Thanks for pointing out this typo. This should be $L \ge 1$. **Regarding the parameter $\Delta$ in our linear-regret lower bound:** In the principal-agent problem, the incentives provided to any arm lie within the range $[0,1]$. In our lower bound instance, the optimal incentive to provide to arm $1$ is $\Delta$; any deviation above or below this value results in linear regret for the algorithm. We select $\Delta \in [0.7, 0.71]$, which is relatively large on the $[0,1]$ scale. **Response to reviewer's question:** Our upper bound applies to all scenarios; the tie-breaking assumption is made only for simplicity of presentation. We have addressed all possible tie-breaking scenarios in the appendix. In contrast, our lower bound is a worst-case bound. Thus, analogous to multi-armed bandit problems, establishing instance-dependent upper and lower bounds in this setting would be an interesting direction for future research.
Summary: Building upon the literature on repeated principal-agent games, this paper explores a setup where a principal recommends an action from a bandit instance to an agent and offers a payment so the agent is incentivized to follow the recommendation. Two cases are studied: first when the agent greedily maximizes her utility and second, when the agent chooses her action with a smooth choice model. The difference with the state of the art here is that the agent's type varies over time and is unobserved by the principal. To stick with the contract theory terminology, it is a setup with observed action and unknown type (i.e. adverse selection). The adversarial arrival of agents of different types makes it necessary to introduce a new definition of regret and to use a discretization of the incentives' space before linking the problem to adversarial bandits. Claims And Evidence: Yes. Methods And Evaluation Criteria: There are no experimental evaluation criteria beyond a theoretical regret bound. It is not an issue to me to avoid experimental evaluation, since the paper is mostly about theoretical issues (as is the literature in this field). Theoretical Claims: All the claims of the paper are supported by clear proofs. Experimental Designs Or Analyses: There are no experiments in this paper. Supplementary Material: I read the proofs, which represent 100% of the supplementary material. Relation To Broader Scientific Literature: The paper provides a consistent literature review in two areas: principal-agent problems (a field of economics) and online learning. Relying on recent works that consider repeated principal-agent problems (typically Dogan et al.
Repeated principal-agent games with unobserved agent rewards and perfect-knowledge agents, Scheid et al., Incentivized learning in principal-agent bandit games, and then Ben-Porat et al., Principal-Agent Reward Shaping in MDPs, which considers an MDP setup, among other extensions), the paper introduces a new complexity to the problem, namely the adversarial arrival of agents (with different types). To tackle it, the paper introduces a regret definition very close to the one defined in Zhu et al., The sample complexity of online contract design (this paper studies a repeated contract design setting with unobserved action and unknown types) and uses a somewhat similar discretization of the incentives' space. While the discussion of the relation to the [repeated] principal-agent literature seems consistent to me, a discussion of the latter reference might be interesting. Essential References Not Discussed: NA. Other Strengths And Weaknesses: I really appreciate the setup: it seems interesting to me to account for adversarial agents' types. I also enjoy the technical approach, which consists of first discretizing the incentives' space and then running an adversarial bandit algorithm. The work is nicely supported by the many lower bounds provided. I appreciate having an outline of the important proofs in the main text and the full proofs in the appendix. From my reading, the regret upper bounds on page 8 for the Instance-dependent Algorithm for Single-Arm/General Incentives are stated without a theorem or a clear proof. I think that the paper would benefit from a clear presentation of the algorithm used in that setup (including the application of the Zooming algorithm), a formal theorem, as well as proofs. The employed tools seem very interesting, but the way they are presented in the current version is definitely too brief. Other Comments Or Suggestions: I would have appreciated a clear statement of the proposed algorithms. I believe that it would improve readability.
Questions For Authors: Could you discuss a bit more the relation between the technical issues of your approach as compared to Zhu et al., The sample complexity of online contract design? Especially the way of discretizing the contract/incentives space. Is the definition of regret that you give common in the unknown opponent principal-agent literature? Or in contract design more generally. Do you believe that tackling the extension where the principal only observes noisy rewards is doable or close in the analysis? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their review. We now address their concerns. **On instance-dependent Algorithm for the smooth setting:** For the smooth setting, after establishing the minimax regret bounds, we directly apply the Zooming algorithm from [1] in Section 4.3 of our paper. The instance-dependent regret bound then follows from Theorem 3.1 in [1], which characterizes the regret in terms of the zooming dimension. We agree that a clearer presentation would benefit the reader; thus, if space permits, we will include a pseudocode outline of the Zooming algorithm along with a precise statement of the associated regret bound, citing Theorem 3.1 of [1] for completeness. [1] Podimata, C. and Slivkins, A. Adaptive Discretization for Adversarial Lipschitz Bandits. In *Proceedings of the Thirty-Fourth Conference on Learning Theory*, vol. 134, pp. 3788–3805, 2021. **Comparison with Zhu et al.** Since both papers consider settings where the agent best responds to the incentive, a key technical challenge in both is that the principal’s utility function is not Lipschitz continuous with respect to the incentive. That is, small changes in incentives can lead to abrupt shifts in agent behavior. However, the approaches for handling this non-smoothness differ significantly. Zhu et al. (2022) identify directions in the incentive space along which the utility function is continuous and construct a discretization based on spherical codes to cover these directions. In contrast, our approach leverages structural knowledge of agent behavior: for any given incentive, we know exactly which arm each agent type will choose. This allows for a more tailored discretization strategy. For instance, in the single-arm incentive setting, we enumerate threshold points across agent types and prune the incentive set by retaining only the most rewarding vectors, making the discretization independent of $N$. 
In the general incentive setting, we again use the knowledge of agents’ best responses to characterize the incentive space as a polytope, and the algorithm only needs to consider the extreme points of that polytope. **On the Prevalence of the Regret Definition in the literature:** As we highlight in the paper, our work initiates a new problem setting that combines elements of the principal-agent framework with adversarially ordered arrivals. In this setting, the regret notion we adopt arises naturally, and closely related definitions have been studied in the context of Stackelberg games. For instance, the following works—one of which was also cited by Reviewer QVD9—explicitly consider this form of regret: *References* 1. Harris et al. Regret Minimization in Stackelberg Games with Side Information, NeurIPS 2024. 2. Balcan, Maria-Florina, et al. Commitment without regrets: Online learning in Stackelberg security games. Proceedings of the Sixteenth ACM Conference on Economics and Computation, 2015. Although this exact regret formulation may not have been explored in the contract design literature to our knowledge, we believe it is a natural and principled objective in our setting. As such, it may help inspire new directions in principal-agent and contract-design problems under adversarial settings. **On the case when the principal receives only noisy feedback:** Under stochastic bandit feedback, our upper-bound results for single-arm incentives under the greedy model extend naturally to the case where the principal’s rewards are unknown, as follows. We retain the current discretization of the incentive vector space and run Tsallis-INF over these discretized points, incurring a regret of $\sqrt{KNT}$. However, in contrast to the known principal-reward setting, the dependence on $N$ cannot be improved to achieve a regret bound of $\min\left(\sqrt{KT\log N}, K\sqrt{T}\right)$. 
This is because an $N$-armed stochastic multi-armed bandit problem can easily be reduced to our principal-agent problem, and it is known that an $N$-armed stochastic multi-armed bandit problem has a regret lower bound of $\Omega\left(\sqrt{NT}\right)$. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed and interesting answers to my questions and I thus increase my grade.
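The discretize-then-reduce strategy discussed in this rebuttal thread (discretize the incentive space, then run an adversarial bandit algorithm over the grid points) can be illustrated with a toy sketch. Everything here is hypothetical: the instance (`v`, `mu`, the grid, the alternating type sequence) is invented for illustration, and plain EXP3 stands in for the Tsallis-INF algorithm the authors actually use.

```python
import math
import random

random.seed(0)

# Toy instance (all numbers hypothetical): N = 2 arms, K = 2 greedy agent
# types. Agent type j picks the arm maximizing mu[j][i] + pi[i], where pi
# is the incentive vector; the principal earns v[i] - pi[i] on that arm.
v = [1.0, 0.4]                      # principal's known rewards
mu = [[0.1, 0.6], [0.3, 0.9]]       # agents' intrinsic utilities

def principal_reward(pi, j):
    # Greedy best response of agent type j (ties broken toward lower index).
    i = max(range(2), key=lambda i: mu[j][i] + pi[i])
    return v[i] - pi[i]

# Discretize single-arm incentives on arm 0: pi = (x, 0) for x on a grid.
grid = [k / 20 for k in range(21)]

# EXP3 over the grid (standing in for Tsallis-INF) against an adversarial
# sequence of agent types (here simply alternating).
T = 2000
eta = math.sqrt(math.log(len(grid)) / (len(grid) * T))
gamma = 0.05                        # uniform exploration mixing
weights = [0.0] * len(grid)         # cumulative importance-weighted rewards
total = 0.0
for t in range(T):
    j = t % 2                       # adversarial type sequence
    m = max(weights)
    expw = [math.exp(eta * (w - m)) for w in weights]
    s = sum(expw)
    probs = [(1 - gamma) * e / s + gamma / len(grid) for e in expw]
    a = random.choices(range(len(grid)), probs)[0]
    r = principal_reward([grid[a], 0.0], j)
    total += r
    weights[a] += r / probs[a]      # importance-weighted update

print(total / T)                    # average per-round principal utility
```

In this instance the best fixed grid incentive earns 0.45 per round on average, so the bandit's running average should climb toward that value; the sketch says nothing about the tailored discretization (threshold enumeration, polytope extreme points) that makes the paper's bounds independent of $N$.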
Summary: This paper studies a repeated principal-agent game, where the principal delegates their action to $K$ agents, each choosing from $N$ actions, with agent types ($i_t \in [K]$) assigned adversarially in each round. First, they show that achieving no-regret requires the principal to have prior knowledge of the agents' behavior—specifically, access to their best response functions—when agents act greedily (maximizing their utility in each round). Under this greedy action model, the paper presents an algorithm with $O(\min(\sqrt{KT\log N}, K\sqrt{T}))$ regret (in the single-arm incentive case) and proves a matching lower bound up to an $O(\log K)$ factor. They then extend their results to a smoothed action model, where agents choose arms probabilistically based on an unknown distribution that varies smoothly with the incentive vector (the principal's payout vector for each arm). In this smoothed model and the single-arm incentive case, they provide a no-regret algorithm and a matching lower bound. Claims And Evidence: This paper is mainly theoretical and its claims are supported by proofs. Methods And Evaluation Criteria: N/A Theoretical Claims: I found no specific issues with their proofs, particularly the main upper bounds in Theorem 3.1, Theorem 3.2, and the unnamed no-regret result in Section 4.1. Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: This paper contributes to the literature on learning in repeated principal-agent games, particularly in the setting where agent types arrive adversarially, providing near-optimal algorithms for this scenario. Prior works have primarily considered settings where the principal repeatedly contracts with a fixed but unknown agent type (Scheid et al., 2024b; Dogan et al., 2023a,b) or where the agent's type is drawn stochastically from a fixed distribution (Ho et al., 2014; Gayle & Miller, 2015). While some prior settings have advantages—e.g., Scheid et al. 
study the case where the principal's reward vector is drawn from an unknown distribution, whereas this work assumes a known reward vector—this paper's adversarial arrival model is relatively novel (but not completely new, see the section below) and warrants further study. Essential References Not Discussed: There is a missing prior work, Harris et al. (2024), which studies a very similar problem setting. See the weaknesses section for details. Other Strengths And Weaknesses: **Strengths:** - This paper studies a strong setting in the repeated principal-agent game where the agent's type is chosen adversarially. - It provides a solid negative result, motivating the necessity for the principal to know each agent's best response rule. - For their algorithms, they establish nearly matching lower bounds. **Weaknesses:** - (Minor weakness) Lack of concrete applications for the studied problem setting. - The setting of this paper appears to be subsumed by that of Harris et al. (2024), which studies a general principal-agent (Stackelberg) game with side information, adversarial agent-type arrivals, and bandit feedback (where the principal observes only the agent's chosen action). While Harris et al. achieve a regret bound of $O(T^{2/3})$ in this setting, worse than that of this work (Note that, a concurrent result by Balcan et al. (2025) improves this to $O(T^{1/2})$), as their setting includes side information, it inherently considers a stronger regret benchmark. Given the similarities in problem formulation, a comprehensive comparison between this work and Harris et al. seems necessary. **Reference** - Harris et al. Regret Minimization in Stackelberg Games with Side Information, *NeurIPS* 2024. - Balcan et al. Nearly-Optimal Bandit Learning in Stackelberg Games with Side Information *arXiv:2502.00204* 2025. Other Comments Or Suggestions: - Table 1 could include pointers to the corresponding theorem numbers to improve readability. 
- A proof sketch or illustration of the main idea or strategy in the main text would be very helpful. - Formally stating the regret upper bound for the smoothed case as a theorem would enhance the paper's coherence, readability, and navigability. Questions For Authors: Is it possible to extend the result to the case where the principal's rewards are unknown, but the principal receives only bandit feedback? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their review. We now address their concerns. **On Presentation suggestions:** We thank the reviewer for the helpful suggestions regarding presentation. We will carefully incorporate them to improve clarity and readability in the next version. **Comparison with Harris et al.** We thank the reviewer for referring us to the recent NeurIPS paper by Harris et al. (2024) and the follow-up concurrent arXiv preprint by Balcan et al. (2025). While these works share similarities with our framework—such as a principal-agent setup (which corresponds to the leader-follower formulation in their paper), with unknown agent types appearing each round—their settings differ in several important ways that prevent them from subsuming our problem: 1. *Action Space*: In our setting, the principal chooses an incentive vector from the hypercube $[0,1]^N$. In contrast, in the work by Harris et al., the leader selects a mixed strategy from a probability simplex defined over a finite set of actions $\mathcal{A}$. 2. *Agent/Follower Rewards*: In our model, if agent $j$ selects arm $i$, they obtain a reward of $\mu_i^j + \pi_i$, where $\pi$ is the incentive vector chosen by the principal. Conversely, in the leader-follower model of Harris et al., the follower's reward for choosing action $a_f$ is $\sum_{a_\ell \in \mathcal{A}} x[a_\ell] u_j(z, a_\ell, a_f)$, where $x$ is the mixed strategy chosen by the leader and $z$ represents contextual information. 3. *Principal/Leader Rewards*: In our setting, if arm $i$ is chosen, the principal’s reward is $v_i - \pi_i$. On the other hand, in Harris et al., if the follower chooses action $a_f$, the leader’s reward is $\sum_{a_\ell \in \mathcal{A}} x[a_\ell] u(z, a_\ell, a_f)$, where $x$ is the mixed strategy chosen by the leader and $z$ is contextual information. These distinctions—particularly in the action spaces and the reward structures—mean that the Stackelberg game setting of Harris et al.
(2024) does not subsume our principal-agent formulation. Nonetheless, we will clarify this comparison in our paper and emphasize that extending our principal-agent framework to a contextual setting, akin to theirs, would be a promising direction for future work. **On concrete applications of our problem setting:** While we briefly discussed motivating examples in the introduction, we are happy to expand on them here to further clarify how the studied problems apply to real-world scenarios. *Adversarial arrival of agents.* In practice, agent arrivals often deviate from fixed stochastic patterns due to factors like non-stationarity, strategic behavior, or external influences. For instance, in online shopping, discount ads are shown to all users, but only some choose to act—often in response to timing, personal context, or even social trends. The sequence of users who respond may not follow any stable distribution. Modeling arrivals adversarially allows us to account for such unpredictable and potentially strategic participation without assuming a specific arriving distribution. *Agent response models.* For additional motivating examples of the best response and smooth response models, please refer to our detailed response to Reviewer LMoE. **On the case when the principal receives only bandit feedback:** Under stochastic bandit feedback, our upper-bound results for single-arm incentives under the greedy model extend naturally to the case where the principal’s rewards are unknown, as follows. We retain the current discretization of the incentive vector space and run Tsallis-INF over these discretized points, incurring a regret of $\sqrt{KNT}$. However, in contrast to the known principal-reward setting, the dependence on $N$ cannot be improved to achieve a regret bound of $\min\left(\sqrt{KT\log N}, K\sqrt{T}\right)$. 
This is because an $N$-armed stochastic multi-armed bandit problem can easily be reduced to our principal-agent problem, and it is known that an $N$-armed stochastic multi-armed bandit problem has a regret lower bound of $\Omega\left(\sqrt{NT}\right)$. --- Rebuttal Comment 1.1: Comment: Thank you for your response, which clarifies the relationship between this paper and the prior/concurrent works. I recommend including brief pointers to the relevant references, along the lines of what was outlined in the rebuttal, in the revision. I have raised my score based on your response.
Summary: The paper introduces a repeated principal-agent setting where agents arrive in an adversarial fashion. The principal interacts with agents of unknown types by strategically offering incentives to influence their decisions. The paper proposes algorithms with sublinear regret bounds under two key settings: (1) when the principal knows the best response of each agent type and (2) when agent decisions vary smoothly with incentives. The authors also present matching lower bounds for both settings and extend the results to cases where the principal can incentivize multiple arms simultaneously. ## update after rebuttal I thank authors for their detailed responses. The paper made reasonable amount of advancement in the existing literature, but I still do not see any noteworthy technical breakthrough here. Hence, I am keeping my original score. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: I only read through the proof sketch Experimental Designs Or Analyses: n/a Supplementary Material: I only check the lower bound construction Relation To Broader Scientific Literature: The adversarial setup is a meaningful addition to the existing literature in repeated principal-agent problem. Essential References Not Discussed: n/a Other Strengths And Weaknesses: The paper begins by formalizing the hard cases for the repeated principal-agent problems in general and then introduces natural conditions where it is possible to design no-regret learning algorithm. Overall, it is a nice complement to the existing literature on repeated principal-agent problem. The paper is overall well-written. That said, I do not see any technical breakthrough here, as the designs and analysis of these algorithms are well-expected. Other Comments Or Suggestions: n/a Questions For Authors: Can you provide some real-world motivation on the two models you consider? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their review. We now address their concerns. **On technical breakthrough:** While we respect the reviewer’s opinion, we believe our work includes notable technical breakthroughs. We develop novel lower bound techniques tailored to the greedy model setting—methods that, to the best of our knowledge, are new and have the potential to extend to other related problems like regret minimization in linear bandits and Stackelberg games. Additionally, we develop a new discretization approach and provide a reduction to linear bandits, which together yield near-optimal regret bounds. To the best of our knowledge, this reduction strategy is novel and has not been explored in prior work. **Real world motivation of the two models:** While we briefly discussed motivating examples in the introduction, we are happy to expand on them here to further clarify the relevance of the two models to real-world scenarios. *Best response model.* A natural example arises in online shopping, where customers make purchase decisions based on visible discounts (i.e., incentives). Customers often wait until a discount reaches a historical low or crosses a personal threshold before making a purchase (Gunadi & Evangelidis, 2022). Similarly, in online labor markets, crowdworkers frequently accept tasks only if the offered payment exceeds a minimum expected amount (Horton & Chilton, 2010). These behaviors reflect a threshold-based decision process, motivating our use of the best response model. In this model, the agent plays an arm only when the incentive on that arm exceeds a predefined threshold. *Smooth response model.* In contrast, consider routine purchases such as daily necessities. Here, small changes in discounts do not lead to abrupt changes in behavior but instead gradually affect the probability of purchase (Bijmolt et al., 2005).
A similar pattern is observed in ad click-through behavior, where users’ likelihood of clicking on an ad increases with the attractiveness of the offer—such as a better discount or a more personalized promotion—but does so in a smooth, probabilistic manner rather than through sharp thresholds (Bleier & Eisenbeiss, 2015). These scenarios motivate the need for a more flexible response model in which decisions vary smoothly with the incentive. *References*: Gunadi, M. P., & Evangelidis, I. (2022). The impact of historical price information on purchase deferral. Journal of Marketing Research, 59(3), 623–640. Horton, J. J., & Chilton, L. B. (2010). The labor economics of paid crowdsourcing. In Proceedings of the 11th ACM Conference on Electronic Commerce (pp. 209–218). Bijmolt, T. H., Van Heerde, H. J., & Pieters, R. G. (2005). New empirical generalizations on the determinants of price elasticity. Journal of Marketing Research, 42(2), 141–156. Bleier, A., & Eisenbeiss, M. (2015). Personalized online advertising effectiveness: The interplay of what, when, and where. Marketing Science, 34(5), 669–688.
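The contrast between the two response models described in this rebuttal—a discontinuous threshold best response versus a take-up probability that varies smoothly with the incentive—can be sketched as follows. This is a toy illustration only; the threshold value and the logistic form are hypothetical choices, not the paper's model specification.

```python
import math

# Toy illustration (all numbers hypothetical) of the two agent models,
# for a single incentivized arm with incentive x in [0, 1].
threshold = 0.35   # hypothetical utility gap the incentive must close

def greedy_response(x):
    # Best-response model: the agent takes the arm iff the incentive
    # clears the threshold, so behavior jumps abruptly at x = threshold.
    return 1.0 if x >= threshold else 0.0

def smooth_response(x, temp=0.1):
    # Smooth model: the take-up probability varies continuously with x
    # (here a logistic curve, Lipschitz in x with constant 1 / (4 * temp)).
    return 1.0 / (1.0 + math.exp(-(x - threshold) / temp))

for x in (0.30, 0.34, 0.36, 0.40):
    print(x, greedy_response(x), round(smooth_response(x), 3))
```

The discontinuity of the greedy model is what makes the principal's utility non-Lipschitz in the incentive (motivating the tailored discretization), while the smooth model's continuity is what permits Lipschitz-bandit tools like the Zooming algorithm.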
Offline-to-Online Reinforcement Learning with Classifier-Free Diffusion Generation
Accept (poster)
Summary: The paper proposes CFDG, a novel method of generative data augmentation for offline-to-online RL. The paper points out that offline and online data have different characteristics that are important for high performance. To this end, CFDG employs a conditional diffusion model where the condition is a binary value: offline / online. Experiment results validate that it achieves competitive performance in various tasks. Claims And Evidence: The main claim is straightforward: we need to use a conditional diffusion model to augment the dataset for the offline-to-online RL problem. The claim is well supported by the data distribution analysis in Section 3.1. Methods And Evaluation Criteria: The authors follow the conventional experiment settings of offline-to-online RL. Theoretical Claims: There is no theoretical claim. Experimental Designs Or Analyses: The authors follow the conventional experiment settings of offline-to-online RL. Supplementary Material: I read the appendix of the paper to understand some details of each procedure. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Please see above Other Comments Or Suggestions: There are several comments and suggestions listed below. 1. As the key motivation is that there are some different characteristics between the offline and online datasets, why don’t we train separate diffusion models to augment each of them? As there are only two classes, it seems that the cost of training two models is not expensive. I recommend that the authors compare the results with an independent diffusion model. 2. Experiments are conducted on Locomotion and AntMaze tasks. I recommend that the authors conduct experiments on more realistic tasks such as Adroit. Questions For Authors: Please see above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive review and valuable suggestions! Below, we provide a detailed response to your questions and comments. If any of our responses fail to sufficiently address your concerns, please inform us, and we will promptly follow up. **[W1] Training Separate Diffusion Models for Offline and Online Data** We conducted experiments on locomotion tasks (halfcheetah, hopper, and walker2d) where we trained separate diffusion models for offline and online data. The results show that our CFDG approach outperforms training two independent diffusion models. | Dataset | Independent | CFDG | | ----------- | :---------: | :--------: | | Halfcheetah | 81.02±1.84 | 84.44±2.15 | | Hopper | 66.23±8.32 | 68.22±4.74 | | Walker2d | 75.89±7.13 | 93.65±6.00 | All results are averaged over the four datasets and are assessed across 5 random seeds. The key reason for this improvement is that classifier-free guidance enables the generation of data that is better aligned with the online policy. As demonstrated in Section 4.2 of our paper, conditioning the diffusion model on both offline and online data labels allows for more effective guidance, reducing redundancy between newly generated online data and existing offline data. This ultimately enhances the quality of online data, leading to better performance. While the computational cost of training two separate models is not prohibitive in terms of GPU resources, it doubles the training time of diffusion model, making it less efficient without yielding better results. **[W2] Experiments on More Realistic Tasks** To further validate our approach, we conducted additional experiments on Adroit tasks, which are considered more challenging due to their high-dimensional action space and dexterous control requirements. The specific results are shown here. 
| Dataset | Base | EDIS | CFDG | | ----------------- | :-------: | :-------: | :-------: | | relocate-human-v1 | 0.4±0.4 | 0.2±0.6 | 1.5±1.8 | | pen-human-v1 | 72.2±64.0 | 73.4±10.3 | 96.8±59.4 | | door-human-v1 | 4.5±7.9 | 6.20±3.2 | 31.7±6.5 | | **Average** | 25.7 | 26.6 | 43.3 | Our results show that CFDG achieves over 50% improvement over baselines and the model-based EDIS [1] approach on Adroit tasks, demonstrating its effectiveness beyond locomotion and AntMaze tasks. [1] Energy-guided diffusion sampling for offline-to-online reinforcement learning. --- Rebuttal Comment 1.1: Comment: (Sorry, I send an official comment and find that it is not visible to the authors...) Thank you for conducting additional experiments. I have some additional comments below: **[W1] Training Separate Diffusion Models for Offline and Online Data** The results are clear. I recommend that the authors add t-SNE visualization of independent training of diffusion models in the final manuscript. **[W2] Experiments on More Realistic Tasks** The results are quite surprising! By the way, could the authors explain more details about the experiment setting algorithms, such as base algorithms? When I checked the original EDIS paper, it seems that there exists some performance gap. ---- It seems that I cannot add a reply to your response...I modify the initial response for a reply. Thank you for your additional experiments. I just updated the score. While the idea is too simple, and when I see the t-SNE plot, it seems that there is no big difference compared to independent training, the idea achieves robust performance improvement on various benchmarks. --- Reply to Comment 1.1.1: Comment: Thank you for the insightful feedback. **[W1] Training Separate Diffusion Models for Offline and Online Data**: We agree with the suggestion and will include the visualization in the final manuscript. 
For reference, we have also provided an anonymous comparison link here: https://anonymous.4open.science/r/icml-test-C4AC/compare.pdf. In our observation, training separate diffusion models for offline and online data may lead to overlapping samples between the two distributions. Intuitively, by incorporating both offline and online data as separate class labels during joint training, the diffusion model can better distinguish between them, reducing redundancy and enhancing the quality of generated online data. **[W2] Experiments on More Realistic Tasks**: Our base algorithm in the Adroit setting is IQL. We also noticed the performance gap mentioned in the EDIS paper. Specifically, in some locomotion tasks like HalfCheetah, our reproduction yielded better results, while in Adroit tasks, the reproduced performance was suboptimal. This discrepancy may stem from potential modifications in experimental settings made by the original authors that were not fully disclosed. To ensure consistency, we used the official EDIS codebase from https://github.com/liuxhym/EDIS and followed their default parameter configurations.
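The mechanism this thread discusses—one diffusion model conditioned on a binary offline/online label, steered at sampling time by classifier-free guidance—reduces to a simple combination rule for the noise estimates. Below is a toy sketch of that rule only; the `denoiser` stub and its numbers are hypothetical stand-ins, not the authors' network or hyperparameters.

```python
import random

random.seed(0)

# Minimal sketch of the classifier-free guidance combination behind
# CFDG-style conditional generation. The "denoiser" here is a stub that
# returns a hypothetical noise estimate for a sample x given a label
# (0 = offline, 1 = online, None = unconditional).
def denoiser(x, label):
    # Hypothetical toy predictor: the label just shifts the estimate.
    shift = 0.0 if label is None else (0.2 if label == 1 else -0.2)
    return [xi * 0.5 + shift for xi in x]

def guided_noise(x, label, w=2.0):
    # Classifier-free guidance: extrapolate from the unconditional
    # estimate toward the conditional one with guidance weight w.
    eps_cond = denoiser(x, label)
    eps_uncond = denoiser(x, None)
    return [(1 + w) * c - w * u for c, u in zip(eps_cond, eps_uncond)]

x = [random.gauss(0, 1) for _ in range(4)]
print(guided_noise(x, label=1))   # steered toward "online"-labeled data
print(guided_noise(x, label=0))   # steered toward "offline"-labeled data
```

Because both labels share one jointly trained model, increasing `w` pushes the two generated populations apart, which matches the rebuttal's point that joint conditional training reduces overlap between generated offline and online samples compared with two independently trained models.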
Summary: Offline-to-online Reinforcement Learning (O2O RL) aims to perform online fine-tuning on an offline pre-trained policy to minimize costly online interactions. To this end, existing work used offline datasets to augment online data. However, a distribution gap exists between the generated data and the online data, limiting overall performance. Hence, the authors propose a new data augmentation approach, Classifier-Free Diffusion Generation (CFDG). By leveraging classifier-free generation developed in diffusion models, CFDG enhances the generation quality of offline and online data. It also employs a reweighting method to better align generated data with the online data. Experiments validate these claims in the widely used D4RL benchmark. Claims And Evidence: Experimental results support the author's claims but experiments in more challenging environments, e.g., manipulation tasks, are required to further evaluate the effectiveness of this method. Methods And Evaluation Criteria: The proposed method is simple and intuitive: it treats offline and online data as two labeled categories, enabling simultaneous sampling of both types with a single diffusion training process; it avoids using an additional pre-trained classifier, allowing flexible data augmentation to adapt to varying data distributions in different RL tasks. Theoretical Claims: no need to check the correctness of theoretical claims. Experimental Designs Or Analyses: Experimental results in more challenging environments, e.g., robotic manipulation tasks, are required to further evaluate the effectiveness of this method. Supplementary Material: I have checked the Appendix section. Relation To Broader Scientific Literature: The proposed method is closely related to the realm of generative modelling and directly adopts the mature techniques that are developed by classifier-free diffusion models to address the offline and online data generation in O2O RL. 
Essential References Not Discussed: No. Other Strengths And Weaknesses: Although the proposed method is simple and intuitive, its technical novelty and insights are limited. Moreover, I expect that more experiments could be conducted on more challenging offline RL benchmarks. Other Comments Or Suggestions: No. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your constructive review and valuable suggestions! Below, we provide a detailed response to your questions and comments. If any of our responses fail to sufficiently address your concerns, please inform us, and we will promptly follow up. **[W1] Technical Novelty and Insights** While our method is simple and intuitive, its effectiveness lies in its ability to improve offline-to-online RL performance across various tasks. By leveraging classifier-free guidance, our approach enables better alignment between generated data and the online policy, leading to substantial performance gains. **[W2] Additional Experiments on Challenging Offline RL Benchmarks** To further validate our method, we conducted additional experiments on **Adroit tasks**, which are widely recognized as challenging benchmarks for offline RL due to their high-dimensional action space and complex dexterous manipulation requirements. We compared CFDG against EDIS [1], a state-of-the-art model-based approach, as well as standard baselines. The results demonstrate that CFDG achieves over 50% improvement over both baselines and EDIS on Adroit tasks, further confirming its effectiveness in robotic manipulation scenarios. | Dataset | Base | EDIS | CFDG | | ----------------- | :-------: | :-------: | :-------: | | relocate-human-v1 | 0.4±0.4 | 0.2±0.6 | 1.5±1.8 | | pen-human-v1 | 72.2±64.0 | 73.4±10.3 | 96.8±59.4 | | door-human-v1 | 4.5±7.9 | 6.20±3.2 | 31.7±6.5 | | **Average** | 25.7 | 26.6 | 43.3 | [1] Energy-guided diffusion sampling for offline-to-online reinforcement learning.
Summary: This paper introduces CFDG, a framework that applies data augmentation to both offline and online datasets in offline-to-online algorithms. Claims And Evidence: The paper conducts extensive experiments on the D4RL dataset to evaluate the effectiveness of the proposed framework. Methods And Evaluation Criteria: Yes, the evaluation criteria adopted are a common practice in this area. Theoretical Claims: No major theoretical claims are presented in the paper. Experimental Designs Or Analyses: Yes, I have reviewed the experimental designs used to evaluate the effectiveness of the proposed framework on the D4RL benchmarks. Additionally, the ablation studies make sense. Supplementary Material: No supplementary materials provided. Relation To Broader Scientific Literature: This paper falls within the area of offline-to-online RL algorithms and introduces a data augmentation approach to enrich existing offline and online datasets. Essential References Not Discussed: Current references are appropriate. Other Strengths And Weaknesses: No further comments. Other Comments Or Suggestions: No further comments. Questions For Authors: No further questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable review! If you have any additional comments, please feel free to share them—we would be happy to address any questions or clarifications you may have.
Summary: This paper introduces Classifier-Free Diffusion Generation (CFDG), a model-based data augmentation method for offline-to-online RL. The key idea is to train a diffusion-based data generation model with classifier-free guidance to differentiate between online and offline data. The generated data is then used to augment real data during online RL. In the experiments, CFDG was integrated with several RL methods, and was also compared to other data augmentation approaches, demonstrating performance improvements. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No theoretical components were presented. Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: The paper will contribute to the growing use of generative models (specifically diffusion models) within RL. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - The idea of leveraging classifier-free guidance for data generation is simple but clear. - The method has the flexibility to be combined with various RL methods. - The presented results are compelling, as they show improvements over several RL methods as well as other data augmentation techniques. Weaknesses / Questions: There are several weaknesses and parts that are unclear to me: - In Section 4.1, while the method was combined with several offline-to-online RL methods, in principle, it can also be integrated with other online RL algorithms that leverage offline data. Would it be possible to evaluate whether the proposed approach can improve SOTA sample-efficient RL algorithms, such as RLPD [1]? - While the method uses classifier-free guidance, what value is used for the guidance weight when generating offline/online data? Is the performance sensitive to this value? It would also be interesting to provide further ablation studies on this parameter, as it directly influences how online or offline the generated data is. 
- During the offline pre-training phase, why is the diffusion model not trained and used for augmentation, but instead only introduced after the online phase begins? How would the performance be affected if the diffusion model were also pre-trained and used from the beginning of the offline phase? [1] Ball et al., Efficient Online Reinforcement Learning with Offline Data, 2023 Other Comments Or Suggestions: N/A Questions For Authors: See the Weaknesses section Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive review and valuable suggestions! Below, we provide a detailed response to your questions and comments. If any of our responses fail to sufficiently address your concerns, please inform us, and we will promptly follow up. **[W1] Integration with Other Online RL Algorithms** Although our primary focus is offline-to-online RL, our approach can be easily integrated with other online RL algorithms that leverage offline data. The adaptive Policy Learning (APL) algorithm [1], for instance, uses different update strategies for online and offline data in the online phase—employing online RL updates for online data. If APL is not pre-trained offline, it can be considered as an online RL algorithm utilizing offline data. In our experiments with locomotion tasks (halfcheetah, hopper, and walker2d) using APL, we observed that omitting offline pretraining led to a decline in performance. However, our CFDG method still performed well even in the absence of pretraining, demonstrating its robustness. | Dataset | Base | CFDG | Base w/o offline pretrain | CFDG w/o offline pretrain | | ----------- | :---------: | :---------: | :-----------------------: | :-----------------------: | | Halfcheetah | $85.5±22.5$ | $95.8±9.8$ | $72.0±24.4$ | $92.5±8.3$ | | Hopper | $86.5±16.0$ | $85.3±16.8$ | $83.8±21.1$ | $84.2±17.3$ | | Walker2d | $71.3±18.3$ | $89.5±21.3$ | $70.3±25.2$ | $79.8±19.3$ | | **Average** | $81.1$ | $90.2$ | $75.4$ | $85.5$ | All results are averaged over the four datasets and are assessed across 5 random seeds. **[W2] Guidance Weight in Classifier-Free Guidance** In our classifier-free guidance approach, the guidance weight is set to 1. We have conducted an ablation study where we varied the guidance weight from 0 to 5 (in increments of 0.1). The results indicated that the performance is largely unaffected by this parameter, except when the guidance weight is set to 0. 
In this case, the diffusion model becomes unconditional, leading to a noticeable drop in performance. **[W3] Use of Diffusion Model in Offline Pretraining Phase** We also experimented with using the diffusion model for data augmentation during the offline pretraining phase. While this approach improved the performance immediately after offline pretraining, the benefits were not carried over once the online phase began. | Dataset | CFDG | CFDG w/ DA in offline phase | | ----------- | :---------------------: | :-------------------------: | | Halfcheetah | $48.2 \rightarrow 74.5$ | $51.1 \rightarrow 73.8$ | | Hopper | $37.7 \rightarrow 74.0$ | $65.3 \rightarrow 75.1$ | | Walker2d | $52.1 \rightarrow 85.1$ | $59.7 \rightarrow 84.3$ | The base algorithm of our experiment is IQL. All results are averaged over the four datasets and are assessed across 5 random seeds. As shown in our results ($\text{A} \rightarrow \text{B}$ where $\text{A}$ represents the offline pretraining score and $\text{B}$ represents the online fine-tuning score), using the diffusion model for data augmentation in the offline phase did not result in sustained improvements in the online phase. This can be attributed to the fact that the performance bottleneck in offline-to-online RL is mainly determined by the quality of online data. Our CFDG method addresses this issue by using the diffusion process to learn the distributional differences between offline and online data, ultimately producing data that aligns more closely with the online policy. This alignment is critical for improving the upper performance limit of the agent. [1] Adaptive policy learning for offline-to-online reinforcement learning.
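To make the role of the guidance weight discussed in [W2] concrete, here is a minimal sketch of how classifier-free guidance combines conditional and unconditional noise predictions (an illustration only, not the authors' implementation; the function name is ours):

```python
import numpy as np

def cfg_noise_prediction(eps_cond, eps_uncond, w):
    """Classifier-free guidance: blend the conditional and unconditional
    noise predictions. w = 0 gives a purely unconditional model (the
    failure case noted above); w = 1 uses the conditional prediction."""
    return eps_uncond + w * (eps_cond - eps_uncond)

eps_c = np.array([0.5, -0.2])  # prediction conditioned on the data label
eps_u = np.array([0.1, 0.3])   # unconditional prediction

assert np.allclose(cfg_noise_prediction(eps_c, eps_u, 0.0), eps_u)
assert np.allclose(cfg_noise_prediction(eps_c, eps_u, 1.0), eps_c)
```

Weights $w > 1$ extrapolate past the conditional prediction, which is one reason very large guidance values can distort generated samples.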
An Analysis for Reasoning Bias of Language Models with Small Initialization
Accept (spotlight poster)
Summary: This paper aims to investigate how different initialization scales (small vs. large) may affect transformer models' ability to learn different tasks (specifically reasoning and memorization). The study pretrains a GPT-2 model separately with initialization scales ranging from small to large on reasoning and memorization tasks, where the datasets are both synthetic and real. The study finds an explicit bias: with a smaller initialization scale, the transformer model learns reasoning tasks much faster and generalizes better, while a larger initialization scale makes memorization faster but reasoning worse. The authors used a variety of approaches to investigate the mechanisms of such bias. Specifically: 1. Theoretical analysis. The authors apply an Emb-MLP model to analyze gradient dynamics as well as empirical representation evolution. Results show that smaller initialization can amplify the gradient differences arising from the structured label distributions of reasoning tasks. Moreover, reasoning anchors quickly differentiate themselves while memory mappings do not. They also confirm this theoretical proposition with matching empirical results. 2. Transformer model analysis. The authors analyzed transformer embeddings and found that structured clusters quickly form during training on reasoning tasks. The authors also analyzed the attention matrices and proposed that smaller initialization makes attention approximate an averaging operation. Overall, this paper discovered that smaller initialization scales of transformer models can be good for reasoning and investigated the mechanisms underlying this discovery. Claims And Evidence: The authors claim that a smaller initialization scale of transformer models is beneficial for learning reasoning tasks, while a larger initialization scale is better for memorization tasks. 
In the experiments, results did show that with smaller initialization, the transformer models learn faster and generalize better when trained on reasoning tasks, while larger-scale initialization makes reasoning generalize worse but memorization faster. Methods And Evaluation Criteria: The study uses two styles of tasks and datasets to evaluate the training of reasoning and memorization. One consists of a synthetic compositional task and a random-mapping task. Both are sequences of numerical tokens designed with the same format. However, the compositional task's output is an arithmetic mapping (e.g., addition) of the keys, while the memorization task's output only requires reproduction of the keys. This design minimizes differences in token distribution while requiring differences in computation. The other type consists of benchmark datasets used for reasoning and mapping tasks. The authors analyzed embeddings of models trained on these two datasets and found that the reasoning dataset exhibits a significant hierarchical structure of logical representations, while the other does not. Overall, the authors used both empirical and synthetic datasets to evaluate the model's learning in reasoning and memorization, which is robust and thorough. Theoretical Claims: Based on my limited expertise and understanding of transformer computations, I did not find any explicitly incorrect proof. Experimental Designs Or Analyses: The analysis goes thoroughly from synthetic datasets to empirical datasets, and from representation findings to theoretical analysis, which is robust and comprehensive. Supplementary Material: Yes. I reviewed mainly the experimental and task parts to check how these setups validate reasoning and memorization. These supplementary materials support the claims made in the main text. 
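The contrast between the compositional (reasoning) target and the random (memorization) target described above can be sketched as follows (a hedged illustration under an assumed vocabulary size and anchor function; this is not the paper's exact task specification):

```python
import random

random.seed(0)
VOCAB_SIZE = 100

# Reasoning target: an arithmetic function of the key tokens (e.g., addition),
# so the key -> label mapping is structured and can be inferred at test time.
def reasoning_label(keys):
    return sum(keys) % VOCAB_SIZE

# Memorization target: a fixed random lookup table, so the label carries
# no structure and can only be memorized.
_memory_table = {}
def memory_label(keys):
    key = tuple(keys)
    if key not in _memory_table:
        _memory_table[key] = random.randrange(VOCAB_SIZE)
    return _memory_table[key]

print(reasoning_label([3, 7]))   # structured: (3 + 7) % 100 = 10
print(memory_label([3, 7]) == memory_label([3, 7]))  # arbitrary but consistent
```

Both targets are drawn from the same token vocabulary, which mirrors the reviewer's point that only the computation, not the token distribution, differs between the two tasks.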
Relation To Broader Scientific Literature: The paper proposes an important factor - the initialization scale used when pre-training a transformer-based language model on reasoning and memorization tasks. This paper provides critical empirical and theoretical evidence that smaller initialization enables faster and better learning for reasoning models. This is both a practical and a fundamental concern for building better reasoning models, as well as for understanding how reasoning models can work or fail to work. Essential References Not Discussed: I am not aware of any essential reference omitted by the authors. Other Strengths And Weaknesses: The paper's visualization as well as its structure is neat and easy to understand. Other Comments Or Suggestions: No other comments. Questions For Authors: Though it may be out of the paper's scope, I still wonder: in a typical pre-training setting, we simply use next-token prediction to calculate the loss and optimize. I was wondering if this small-initialization effect can generalize to reinforcement learning algorithms such as GRPO or PPO. The authors are not required to supplement any experiments. Sharing intuitions and insights into this topic would be interesting. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your thorough review of our work and your valuable feedback and questions. Your recognition and support for our research have been immensely encouraging. We address your concerns as follows. **Question**: I was wondering if this small initialization effect can generalize in reinforcement learning algorithms such as GRPO or PPO. **Response**: It is important to study how initialization scale impacts training behavior in reinforcement learning (RL). We provide some preliminary intuitions: 1. The core of small initialization lies in reducing model complexity, compelling the pre-trained model to fit datasets through simpler, more generalizable patterns. With small initialization, a model can learn more reasoning patterns instead of memory patterns. RL can play a critical role in enhancing the reasoning patterns learned in the pre-training stage. On the other hand, employing small initialization for training a reward model (RM) might improve the generalizability of RM scoring, since small initialization could reduce overfitting to training data noise. 2. Building on the complexity-reduction principle, other complexity-reduction techniques such as larger weight decay could be integrated into the RL stage to further enhance generalization. 3. In some post-training workflows, parameters of the pre-trained model are directly updated, rendering the concept of initialization irrelevant. However, in specialized approaches like LoRA (Low-Rank Adaptation), new update parameters are introduced and trained independently. As discussed in our response to Reviewer a4C, the initialization of these update parameters significantly influences fine-tuning results. Analogously, in RL settings, an update policy could be designed. Let $\pi=\pi_{pre-train}+\Delta \pi$ where $\Delta \pi$ is the trainable update policy. Then varying initialization schemes for $\Delta \pi$ might lead to distinct training behaviors. 
We hope our response could provide clarity and value to your question. Once again, we deeply appreciate your support and constructive feedback for our work. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal from the authors. The authors have well addressed my concerns and I would maintain my current score as acceptance. --- Reply to Comment 1.1.1: Comment: Dear Reviewer e1DG, Thank you for your detailed feedback and for taking the time to review our work. We greatly appreciate your recognition of our efforts, and we are pleased that our responses have addressed your concerns. The Authors.
Summary: The paper investigates initialization scale as one of the driving factors of bias towards different types of tasks. The paper considers two types of tasks: 1. Reasoning - represented by a sum over a key and its anchors given a key. This task is constructed to capture logical/arithmetic relationships, which requires generalization (and can be measured with the test set the authors provide). 2. Memory - represented by retrieving a predetermined random sample corresponding to a key-anchor pair. The memory mapping is entirely random, which means it cannot be inferred, only memorized. There are also noise tokens provided to increase the difficulty of the learning problem and control the signal-to-noise ratio. The impact of initialization scale $\gamma$ (weights are initialized $W\in\mathbb R^{d_1\times d_2}, W_{i,j}\sim\mathcal N(\mu=0,\sigma=d_2^{-\gamma})$) upon the relative performance of 1. and 2. is investigated. It is shown that small initialization ($\gamma > 0.5$) biases networks towards reasoning, i.e., accuracy on tasks of type 1 increases more quickly as a function of training epochs than on tasks of type 2, compared to when a large initialization ($\gamma < 0.5$) is used. This phenomenon is demonstrated empirically (transformers + a variation on set2vec, called Emb-MLP), as well as motivated theoretically using differential equations. The theoretical analysis highlights the distribution of task labels as a driving factor behind the rates at which each task is solved. ## Update after rebuttal Score increased to 4: Accept following resolutions to my questions during discussion phase. Claims And Evidence: Update: The author response has resolved claims that were partially supported. 
The following claims are made, and are supported or partially supported: ## Smaller initialization causes a "reasoning bias" in synthetic/controlled tasks (supported) In the synthetic experiments, a smaller initialization ($\gamma=0.8$) shows reasoning tasks learning at a faster rate compared to the memorization experiments, whereas a larger initialization ($\gamma=0.3$) has the memorization tasks learning more quickly. This is shown both for transformers (Figure 2A) and Emb-MLP (2B), which appears to be a simple DeepSets variant [1]. [1] Deep Sets https://arxiv.org/abs/1703.06114 ## The reasoning bias due to smaller initialization is a result of embedding space separation differences between the reasoning and memorization tasks in the chosen synthetic/controlled tasks (partially supported) The authors show that reasoning-related tokens (anchors) become more distinct earlier in training than the memory tokens under smaller initialization ($\gamma=0.8$) (see Figure 3A). It takes a longer time at $\gamma=0.8$ for the memory tokens to become distinguishable. Since for the memory task, the memory tokens need to be distinguishable to be correctly retrieved, the embedding space structure learning directly corresponds to the ability to solve the task. I note partially supported here, because we do not know what the rate of the embedding space learning looks like for other initialization scales. If e.g. Figure 3A looks similar for $\gamma=0.3$ I think that would reject this specific claim. This information is not provided in main text or in the appendix. ## The token label distribution affects the evolution of embeddings during training, and leads to the observed different rates of embedding structure discussed above (supported) The authors present in Proposition 1 (+ Equation 5) the flow for the embedding vectors, and demonstrate that it is dominated by the label distribution. 
They derive expressions for the label distributions in the memory task (Equation 7) and reasoning task (Equation 8). The empirical distribution for the reasoning task (Figure 3B, 3C) can be compared to the theoretical expression (Equation 8). They are consistent. The proofs are given in Appendix B.1 and B.2. The proof of Lemma 2 and the distribution of the memory anchor seem correct. I was not able to verify the precise expanded form of the reasoning anchor distribution (Equations 22 - 25). ## The attention mechanism allows a reasoning bias at small initialization scales (partially supported, unclear) The paper demonstrates this through the composition of two observations. 1. The paper demonstrates that the first attention mechanism behaves like an averaging operator. This is shown empirically in Figure 5, and with high probability in Lemma 1 (lines 380-384). 2. The $W_V$ projector has its largest singular values aligned with the reasoning anchors, but nearly orthogonal to the memory anchors. Consequently, the attention mechanism propagates reasoning anchor information, but not memory anchor information, to subsequent tokens. Point 1 is clear to me. For the second, I can see the evidence in Figure 5C, and understand the projective behavior following the attention operation. My challenge here is two-fold: 1. The transformer has a residual structure, which enables memory tokens to propagate to subsequent tokens (discussed at line 346, right hand column) 2. The "reasoning bias" discussed in this work relates to the rate at which a network learns certain information, and how this relates to the value of $\gamma$ used. In Figure 5 we are shown a single value of $\gamma$, and (I think) only the result at the end of training. We are not sensitive here to the rate at which phenomena occur, which is critical in the first two claims. I can see that the large $\gamma$ value plays a role in the formal analysis, e.g. $\gamma\rightarrow\infty$ is an assumption in Propositions 2,3. 
It is unclear to me what happens away from this limit. ## The above observations translate to real-world language tasks (partially supported) The paper shows the embedding structure resulting from training a language model on a reasoning task (PrOntoQA) and a memory task (TinyStories). The embedding structure presented is consistent with the synthetic analysis. Only partially supported, as the primary claim of this paper relates to the effect of initialization $\gamma$ on the reasoning bias of the model (measured as the rate of learning, which can be induced by the methods discussed in the above claims). The authors only show the results for $\gamma=0.8$ training, and do not show any analysis of how behavior changes as a function of training. To substantiate their claims, the authors need to present results from $\gamma=0.3$ and $\gamma=0.5$, and present any differences in the rates at which things happen for PrOntoQA and TinyStories that follow from the initialization scale change. Methods And Evaluation Criteria: The synthetic tasks constructed, as well as the chosen real-world tasks, are sensible. Theoretical Claims: I validated: - Proposition 1 - Equation 7 - Lemma 1 I was not able to validate the remaining theoretical claims due to their size and technical involvement within the reviewing period; however, I have no reason to expect they are incorrect. I also note that while Lemma 1 is true at initialization, I am not sure if Lemma 1 is sufficient to guarantee that the attention mechanism remains an averaging operator throughout training (since in general, gradient updates disrupt initialization conditions). Figure 5A however empirically indicates that the attention mechanism is an averaging operator post-initialization. Experimental Designs Or Analyses: First, the analyses are in general well done. However, certain aspects are missing; in general, to draw a conclusion, in each case we need to see: 1. The $\gamma$ value varied 2. 
The rates of different phenomena change depending on the value of $\gamma$. This type of analysis is only done for a subset of the experiments, and needs to be shown in all cases. Second, the initialization scale is known to impact learning generally, see e.g. Maximal Update Parameterization ([2]). When we change the initialization scale, we expect to also need to change our learning rates in order to get reasonable network behavior. The experiments in the presented paper use the AdamW optimizer with a learning rate of 1.e-5 for every experiment, for every initialization. This is a suboptimal choice, and raises a small question regarding whether the phenomena observed in the paper would change if the learning rate were the optimal learning rate for each initialization. [2] Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer https://arxiv.org/abs/2203.03466 Supplementary Material: I reviewed all of the supplementary material. I was not able to validate every proof, however, as mentioned above in Theoretical Claims. Relation To Broader Scientific Literature: This work is relevant to the literature that investigates how networks learn, through either memorizing or generalizing, e.g. [3], where networks can easily fit random labels when they are unstructured. The current paper shows the effect of initialization on the rate at which a model would memorize or generalize. [3] Understanding deep learning requires rethinking generalization https://arxiv.org/abs/1611.03530 Essential References Not Discussed: I am not aware of any essential missing references, although [3] may be useful to include for wider research framing. [3] Understanding deep learning requires rethinking generalization https://arxiv.org/abs/1611.03530 Other Strengths And Weaknesses: The number of analyses provided from the different angles (empirical with many different measures, and theoretical) provides a rich, well-rounded view for the investigation. 
Despite the many diagrams, I found the paper slightly challenging to read, potentially due to the sizeable introduction and related works sections. We only arrive at the contributions the others make in Section 3.1, which includes a highly technical presentation of the synthetic task (Appendix A.1 is much easier to follow), which took me significant work to understand the purpose of each aspect of the study. Other Comments Or Suggestions: ## Some suggestions for improving clarity Try and bring Appendix A.1 into main text. If necessary, move some of the related work into the Appendix. Keep only related work elements critical to understanding the paper in main text, ideally directly compared to your own contributions. Provide a roadmap for the reader. In figures, state explicitly which $\gamma$ values are being used, and for which epochs. Do this for every figure. Call out significant findings using \paragraph{...} notation, then back up the finding/observation with following prose. Link to proofs from main text. E.g. in Proposition 1 (line 232) provide a ref to proof in the appendix. Do this for all results in main paper. ## Some suggestions for increasing confidence in claims Present results for a range of $\gamma$ as a function of training for all experiments (see above discussion on claims). Substantiate the LayerNormalization result that does not impact conclusion (line 375). Questions For Authors: 1. How critical is the transformer architecture in Section 4.3? Would a DeepSet [1] solve this task? 2. More generally one can initialize $W\in\mathbb R^{d_1\times d_2}, W_{i,j}\sim\mathcal N(\mu=0,\sigma=c \times d_2^{-\gamma})$ for some width-independent constant $c$ (see [2,4]). Why was the choice made to modify $\gamma$ as the controller of the initializer scale, rather than $c$ (optimal $\gamma$ choice is generally optimizer dependent, whereas $c$ is explicitly optimizer independent, so it would feel like varying $c$ is a more universal choice)? 
Is there an important difference between varying $\gamma$ or $c$? Is the choice stable across different model sizes? [1] Deep Sets https://arxiv.org/abs/1703.06114 [2] Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer https://arxiv.org/abs/2203.03466 [4] A Spectral Condition for Feature Learning https://arxiv.org/abs/2310.17813 Code Of Conduct: Affirmed. Overall Recommendation: 4
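The two initialization parameterizations contrasted in Question 2 can be made concrete with a short sketch (illustrative only; the function name and defaults are our assumptions, not from the paper):

```python
import numpy as np

def init_weight(d1, d2, gamma=0.5, c=1.0, seed=0):
    """Initialize W in R^{d1 x d2} with i.i.d. N(0, sigma^2) entries,
    sigma = c * d2**(-gamma). With c = 1, gamma = 0.5 recovers the usual
    1/sqrt(width) scaling; gamma > 0.5 is the 'small initialization'
    regime studied in the paper, while c rescales width-independently."""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, c * d2 ** (-gamma), size=(d1, d2))

W_small = init_weight(64, 256, gamma=0.8)  # small-init regime
W_large = init_weight(64, 256, gamma=0.3)  # large-init regime
assert W_small.std() < W_large.std()
```

Varying $c$ at fixed $\gamma$ changes the scale by a width-independent constant, whereas varying $\gamma$ changes how the scale shrinks with width, which is why the two knobs can behave differently as the model is scaled up.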
Rebuttal 1: Rebuttal: We sincerely appreciate your thorough review of our work and the valuable feedback you have provided. We address your concerns as follows: (**Due to the character limit, we are not able to display specific textual revisions. However, we will carefully revise our manuscript to address each concern.**) **P1: Different phenomena with different $\gamma$.** Given that our primary focus is to investigate the reasoning bias of models under small initialization scales, and due to the page limit, we present results for $\gamma=0.8$ in the main text. In Appendix C, we exhibit an analogous analysis of the following components under $\gamma=0.3,0.5$: C.1: the embedding space of Emb-MLP, C.2: the embedding space of the Transformer, C.3: the first attention module of the Transformer. These analyses establish that varying the initialization scale significantly influences the model's behavior. We sincerely regret the omission of explicit references to Appendix C in the main text; these will be added in Sec. 4. **P2: The real-world language tasks.** For the same reason as in P1, we did not include results for large initialization. However, we conducted experiments with $\gamma = 0.3, 0.5, 0.8$. We define the following metric: $\Delta L := \frac{L_{Tinystory}-L_{PrOntoQA}}{L_{PrOntoQA}}$. As $\gamma$ increases, $\Delta L$ exhibits an upward trend, indicating a growing bias toward the reasoning task (see https://postimg.cc/TpcZgmtw A). Analysis of the embedding space during the early training stage (step 5000) aligns with that presented in Appendix C (see B of the link above). These results will be added to the paper. **P3: The residual structure.** The residual structure operates through position-wise additive operations between two sequences, as $\mathrm{residual}(X,V)_i = X_i + V_i$, which lacks interactions between tokens at distinct positions. **P4: Attention module (Figure 5) and Lemma 1.** We sincerely apologize for omitting the epoch number in Figure 5. 
Figure 5 displays the attention structure in the early training stage (epoch 200). By the end of training, the attention module exhibits specific patterns for capturing critical information within sequences. Lemma 1 only guarantees the average-operator phenomenon during the early stage. Similar theoretical conclusions have been established in prior work, such as [1]. **P5: The learning rate.** We conducted experiments with lr $\in[1e-5,5e-4]$. The learning bias under different $\gamma$ remains consistent across these configurations (see https://postimg.cc/7CtDSMV6 ). However, when lr increases to 1e-3, training becomes highly unstable, manifesting severe loss spikes. **P6: Why assume $\gamma \to \infty$.** The assumption is primarily a technical device adopted to enable asymptotic analysis in our theoretical framework. In finite-scale scenarios, we focus on the empirical trends as the initialization scale decreases. Actually, for $\gamma\sim 1$, we can already see a very clear reasoning bias during training. **P7: LN does not impact the conclusion.** We conducted an experiment removing the LN module, which exhibits the same phenomena, i.e., smaller initialization scales bias the model toward the reasoning task (see https://postimg.cc/dLbk0xyn ). Theoretically, we can provide an informal explanation. Since our analysis focuses on the initial stage of training, the mean and std can be approximated by their initial values. Consequently, the gradient flow is just multiplied by a constant 1/std, preserving the main structure of the learning dynamics. We will add these analyses to the Appendix. **P8: Question 1.** In Sec. 4.3, since noise and key tokens are sampled from the same distribution, the sequence lacks permutation invariance. To identify the key, the model must utilize positional encoding and components capable of cross-positional information exchange, such as the attention module. However, DeepSets is designed for permutation-invariant set input. It would fail to distinguish the key tokens. 
This fundamental limitation renders DeepSets unsuitable for this task.

**P9: Question 2.** Prior work [2, 3] investigated the impact of $\gamma$ on model dynamics and identified distinct behavioral regimes, which are stable across different model sizes. Inspired by those works, we choose to adjust $\gamma$.

**P10: The suggestions on presentation.** We sincerely appreciate your suggestions and will carefully consider and adopt them.

**P11: Citation.** We appreciate your notification and will add this citation to Related Works.

We once again extend our sincere gratitude for the valuable insights you have provided. We hope that our responses have addressed the concerns you raised.

[1] Training Dynamics of Transformers to Recognize Word Co-occurrence via Gradient Flow Analysis. NeurIPS 2024.
[2] Phase diagram for two-layer ReLU neural networks at infinite-width limit. JMLR 2021.
[3] Initialization is Critical to Whether Transformers Fit Composite Functions by Reasoning or Memorizing. NeurIPS 2024.

--- Rebuttal Comment 1.1: Comment: Thank you for responding to my questions and comments, as well as those of the other reviewers. My review and score have been updated.

--- Reply to Comment 1.1.1: Comment: Dear Reviewer FCaa, We sincerely appreciate your valuable suggestions and comments on our work. We are profoundly grateful for your recognition of our work and your willingness to update the score. The Authors.
Summary: This paper discusses the impact of the initialization of language models on their trained performance on memorization and reasoning tasks. The paper provides proofs showing that reasoning tasks prefer smaller initialization while memorization tasks prefer larger initialization. The authors attribute this behavior to the embedding space being more differentiated at an early training stage, which is further verified by empirical experiments.

Claims And Evidence: The paper claims that reasoning tasks prefer smaller initialization while memorization tasks prefer larger initialization, which is verified by theory and empirical experiments.

Methods And Evaluation Criteria: This paper uses a synthetic dataset to verify its assumptions. While data from a natural language distribution (human annotation) would make the experiments more convincing, this is acceptable when such resources are absent.

Theoretical Claims: I have checked the theoretical claims and find the proofs convincing, though there is a chance that I missed minor mistakes.

Experimental Designs Or Analyses: The experimental design makes sense and supports the theoretical claims.

Supplementary Material: N/A

Relation To Broader Scientific Literature: This paper relates to the impact of model initialization on training performance.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: A potential weakness of this paper is that it does not provide direct guidance for current LM fine-tuning, as most tasks are now fine-tuned from pre-trained models. The paper would be significantly more impactful if the discovery could be applied to analyzing the behavior of pre-trained LMs.

Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your careful reading and evaluation of our work, and we are truly grateful for your recognition of our research. We address your concerns as follows.

**Comment**: A potential weakness of this paper is not providing direct guidance for current LM fine-tuning as most tasks are now fine-tuned based on pre-trained models.

**Response**: We appreciate this question, which is very interesting and valuable, and we offer some preliminary intuitions. It is certainly important to investigate the impact of initialization on fine-tuning, and some related studies have already emerged in this field. For instance, [1] examined how parameter initialization in LoRA affects fine-tuning. That study employed a small initialization scale $\gamma=1$ for matrix A, achieving more efficient feature learning. [2] theoretically explored the dynamic behavior of matrix factorization models under small initialization scales, demonstrating that small initialization reduces model complexity, thereby enhancing generalization capability. Due to the page limit of the current article, we leave this topic for future work. In fact, we have already run some experiments on LoRA and believe our work can naturally be extended to the analysis of fine-tuning.

**Comment**: While data in natural language distribution (human annotation) will make the experiment more convincing, it's acceptable when such resource is absent.

**Response**: Thank you for your suggestion, and we appreciate your understanding. In our experiments, we trained a standard GPT-2 model on two real-world datasets, PrOntoQA and TinyStories, albeit with very limited resources. We observed that models with small initialization exhibited a learning bias toward reasoning tasks (see Figure 1). A parallel analysis of the embedding spaces for both datasets corroborated this conclusion (Sec. 4.4).

Thank you for your detailed feedback and for taking the time to review our work again.
We would be grateful if our responses have addressed your concerns.

[1] The Impact of Initialization on LoRA Finetuning Dynamics. NeurIPS 2024.
[2] Connectivity Shapes Implicit Regularization in Matrix Factorization Models for Matrix Completion. NeurIPS 2024.

--- Rebuttal Comment 1.1: Comment: Thanks for the further explanation. I believe the original version should be accepted, and the follow-up modifications make the contribution clearer. Thank you!

--- Reply to Comment 1.1.1: Comment: Dear Reviewer a4CN, We sincerely appreciate your thorough review and valuable feedback. Thank you for supporting our efforts; we are pleased by your recognition of the modifications. Your insights have been instrumental in improving this work. The Authors.